00:00:00.000 Started by upstream project "autotest-per-patch" build number 132320 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.034 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.060 Using shallow fetch with depth 1 00:00:00.060 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.060 > git --version # timeout=10 00:00:00.090 > git --version # 'git version 2.39.2' 00:00:00.090 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.724 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.738 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.751 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.751 > git config core.sparsecheckout # timeout=10 00:00:03.765 > git read-tree -mu HEAD # timeout=10 00:00:03.784 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.806 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.807 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.905 [Pipeline] Start of Pipeline 00:00:03.919 [Pipeline] library 00:00:03.920 Loading library shm_lib@master 00:00:03.920 Library shm_lib@master is cached. Copying from home. 00:00:03.934 [Pipeline] node 00:00:03.947 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.949 [Pipeline] { 00:00:03.959 [Pipeline] catchError 00:00:03.960 [Pipeline] { 00:00:03.974 [Pipeline] wrap 00:00:03.983 [Pipeline] { 00:00:03.992 [Pipeline] stage 00:00:03.994 [Pipeline] { (Prologue) 00:00:04.012 [Pipeline] echo 00:00:04.013 Node: VM-host-SM17 00:00:04.019 [Pipeline] cleanWs 00:00:04.029 [WS-CLEANUP] Deleting project workspace... 00:00:04.029 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.035 [WS-CLEANUP] done 00:00:04.299 [Pipeline] setCustomBuildProperty 00:00:04.383 [Pipeline] httpRequest 00:00:04.675 [Pipeline] echo 00:00:04.677 Sorcerer 10.211.164.20 is alive 00:00:04.687 [Pipeline] retry 00:00:04.689 [Pipeline] { 00:00:04.706 [Pipeline] httpRequest 00:00:04.710 HttpMethod: GET 00:00:04.711 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.711 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.724 Response Code: HTTP/1.1 200 OK 00:00:04.725 Success: Status code 200 is in the accepted range: 200,404 00:00:04.726 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.929 [Pipeline] } 00:00:11.948 [Pipeline] // retry 00:00:11.956 [Pipeline] sh 00:00:12.238 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.255 [Pipeline] httpRequest 00:00:12.642 [Pipeline] echo 00:00:12.644 Sorcerer 10.211.164.20 is alive 00:00:12.655 [Pipeline] retry 00:00:12.658 [Pipeline] { 00:00:12.673 [Pipeline] httpRequest 00:00:12.678 HttpMethod: GET 00:00:12.679 URL: 
http://10.211.164.20/packages/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:12.680 Sending request to url: http://10.211.164.20/packages/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:12.681 Response Code: HTTP/1.1 200 OK 00:00:12.681 Success: Status code 200 is in the accepted range: 200,404 00:00:12.682 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:45.500 [Pipeline] } 00:00:45.518 [Pipeline] // retry 00:00:45.526 [Pipeline] sh 00:00:45.809 + tar --no-same-owner -xf spdk_fc96810c2908a9f503c6238994746205c3fdd19e.tar.gz 00:00:48.358 [Pipeline] sh 00:00:48.641 + git -C spdk log --oneline -n5 00:00:48.641 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:00:48.641 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public 00:00:48.641 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:00:48.641 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 
00:00:48.641 d47eb51c9 bdev: fix a race between reset start and complete 00:00:48.662 [Pipeline] writeFile 00:00:48.678 [Pipeline] sh 00:00:48.961 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:48.973 [Pipeline] sh 00:00:49.256 + cat autorun-spdk.conf 00:00:49.256 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.256 SPDK_RUN_ASAN=1 00:00:49.256 SPDK_RUN_UBSAN=1 00:00:49.256 SPDK_TEST_RAID=1 00:00:49.256 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.264 RUN_NIGHTLY=0 00:00:49.266 [Pipeline] } 00:00:49.281 [Pipeline] // stage 00:00:49.298 [Pipeline] stage 00:00:49.300 [Pipeline] { (Run VM) 00:00:49.315 [Pipeline] sh 00:00:49.600 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:49.600 + echo 'Start stage prepare_nvme.sh' 00:00:49.600 Start stage prepare_nvme.sh 00:00:49.600 + [[ -n 6 ]] 00:00:49.600 + disk_prefix=ex6 00:00:49.600 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:49.600 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:49.600 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:49.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.600 ++ SPDK_RUN_ASAN=1 00:00:49.600 ++ SPDK_RUN_UBSAN=1 00:00:49.600 ++ SPDK_TEST_RAID=1 00:00:49.600 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.600 ++ RUN_NIGHTLY=0 00:00:49.600 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:49.600 + nvme_files=() 00:00:49.600 + declare -A nvme_files 00:00:49.600 + backend_dir=/var/lib/libvirt/images/backends 00:00:49.600 + nvme_files['nvme.img']=5G 00:00:49.600 + nvme_files['nvme-cmb.img']=5G 00:00:49.600 + nvme_files['nvme-multi0.img']=4G 00:00:49.600 + nvme_files['nvme-multi1.img']=4G 00:00:49.600 + nvme_files['nvme-multi2.img']=4G 00:00:49.600 + nvme_files['nvme-openstack.img']=8G 00:00:49.600 + nvme_files['nvme-zns.img']=5G 00:00:49.600 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:49.600 + (( SPDK_TEST_FTL == 1 )) 00:00:49.600 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:49.600 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:49.600 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.600 + for nvme in "${!nvme_files[@]}" 00:00:49.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:49.600 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.600 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:49.600 + echo 'End stage prepare_nvme.sh' 00:00:49.600 End stage prepare_nvme.sh 00:00:49.611 [Pipeline] sh 00:00:49.893 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:49.893 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:00:49.893 00:00:49.893 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:49.893 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:49.893 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:49.893 HELP=0 00:00:49.893 DRY_RUN=0 00:00:49.893 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:49.893 NVME_DISKS_TYPE=nvme,nvme, 00:00:49.893 NVME_AUTO_CREATE=0 00:00:49.893 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:49.893 NVME_CMB=,, 00:00:49.893 NVME_PMR=,, 00:00:49.893 NVME_ZNS=,, 00:00:49.893 NVME_MS=,, 00:00:49.893 NVME_FDP=,, 00:00:49.893 SPDK_VAGRANT_DISTRO=fedora39 00:00:49.893 SPDK_VAGRANT_VMCPU=10 00:00:49.893 SPDK_VAGRANT_VMRAM=12288 00:00:49.893 SPDK_VAGRANT_PROVIDER=libvirt 00:00:49.893 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:49.893 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:49.893 SPDK_OPENSTACK_NETWORK=0 00:00:49.893 VAGRANT_PACKAGE_BOX=0 00:00:49.893 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:49.893 FORCE_DISTRO=true 00:00:49.893 VAGRANT_BOX_VERSION= 00:00:49.893 EXTRA_VAGRANTFILES= 00:00:49.893 NIC_MODEL=e1000 00:00:49.893 00:00:49.893 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:49.893 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:52.429 Bringing machine 'default' up with 'libvirt' provider... 00:00:52.998 ==> default: Creating image (snapshot of base box volume). 00:00:52.998 ==> default: Creating domain with the following settings... 00:00:52.998 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732010106_62e46ee91e98f1c3b3c3 00:00:52.998 ==> default: -- Domain type: kvm 00:00:52.998 ==> default: -- Cpus: 10 00:00:52.998 ==> default: -- Feature: acpi 00:00:52.998 ==> default: -- Feature: apic 00:00:52.998 ==> default: -- Feature: pae 00:00:52.998 ==> default: -- Memory: 12288M 00:00:52.998 ==> default: -- Memory Backing: hugepages: 00:00:52.998 ==> default: -- Management MAC: 00:00:52.998 ==> default: -- Loader: 00:00:52.998 ==> default: -- Nvram: 00:00:52.998 ==> default: -- Base box: spdk/fedora39 00:00:52.998 ==> default: -- Storage pool: default 00:00:52.998 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732010106_62e46ee91e98f1c3b3c3.img (20G) 00:00:52.998 ==> default: -- Volume Cache: default 00:00:52.998 ==> default: -- Kernel: 00:00:52.998 ==> default: -- Initrd: 00:00:52.998 ==> default: -- Graphics Type: vnc 00:00:52.998 ==> default: -- Graphics Port: -1 00:00:52.998 ==> default: -- Graphics IP: 127.0.0.1 00:00:52.998 ==> default: -- Graphics Password: Not defined 00:00:52.998 ==> default: -- Video Type: cirrus 00:00:52.998 ==> default: -- Video VRAM: 9216 00:00:52.998 ==> default: -- Sound Type: 00:00:52.998 ==> default: -- Keymap: en-us 00:00:52.998 ==> default: -- TPM Path: 00:00:52.998 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:00:52.998 ==> default: -- Command line args: 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:52.998 ==> default: -> value=-drive, 00:00:52.998 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:52.998 ==> default: -> value=-drive, 00:00:52.998 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.998 ==> default: -> value=-drive, 00:00:52.998 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.998 ==> default: -> value=-drive, 00:00:52.998 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:52.998 ==> default: -> value=-device, 00:00:52.998 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.257 ==> default: Creating shared folders metadata... 00:00:53.257 ==> default: Starting domain. 00:00:54.635 ==> default: Waiting for domain to get an IP address... 00:01:12.742 ==> default: Waiting for SSH to become available... 
00:01:12.742 ==> default: Configuring and enabling network interfaces... 00:01:16.032 default: SSH address: 192.168.121.202:22 00:01:16.032 default: SSH username: vagrant 00:01:16.032 default: SSH auth method: private key 00:01:17.936 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:26.061 ==> default: Mounting SSHFS shared folder... 00:01:27.967 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:27.967 ==> default: Checking Mount.. 00:01:29.347 ==> default: Folder Successfully Mounted! 00:01:29.347 ==> default: Running provisioner: file... 00:01:30.284 default: ~/.gitconfig => .gitconfig 00:01:30.543 00:01:30.543 SUCCESS! 00:01:30.543 00:01:30.543 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:30.543 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.543 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:30.543 00:01:30.553 [Pipeline] } 00:01:30.569 [Pipeline] // stage 00:01:30.579 [Pipeline] dir 00:01:30.580 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:30.581 [Pipeline] { 00:01:30.594 [Pipeline] catchError 00:01:30.596 [Pipeline] { 00:01:30.609 [Pipeline] sh 00:01:30.890 + vagrant ssh-config --host vagrant 00:01:30.890 + sed -ne /^Host/,$p 00:01:30.890 + tee ssh_conf 00:01:34.179 Host vagrant 00:01:34.179 HostName 192.168.121.202 00:01:34.179 User vagrant 00:01:34.179 Port 22 00:01:34.179 UserKnownHostsFile /dev/null 00:01:34.179 StrictHostKeyChecking no 00:01:34.179 PasswordAuthentication no 00:01:34.179 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:34.179 IdentitiesOnly yes 00:01:34.179 LogLevel FATAL 00:01:34.179 ForwardAgent yes 00:01:34.179 ForwardX11 yes 00:01:34.179 00:01:34.193 [Pipeline] withEnv 00:01:34.195 [Pipeline] { 00:01:34.209 [Pipeline] sh 00:01:34.490 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:34.490 source /etc/os-release 00:01:34.490 [[ -e /image.version ]] && img=$(< /image.version) 00:01:34.490 # Minimal, systemd-like check. 00:01:34.490 if [[ -e /.dockerenv ]]; then 00:01:34.490 # Clear garbage from the node's name: 00:01:34.490 # agt-er_autotest_547-896 -> autotest_547-896 00:01:34.490 # $HOSTNAME is the actual container id 00:01:34.490 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:34.490 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:34.490 # We can assume this is a mount from a host where container is running, 00:01:34.490 # so fetch its hostname to easily identify the target swarm worker. 
00:01:34.490 container="$(< /etc/hostname) ($agent)" 00:01:34.490 else 00:01:34.490 # Fallback 00:01:34.490 container=$agent 00:01:34.490 fi 00:01:34.490 fi 00:01:34.490 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:34.490 00:01:34.763 [Pipeline] } 00:01:34.781 [Pipeline] // withEnv 00:01:34.789 [Pipeline] setCustomBuildProperty 00:01:34.804 [Pipeline] stage 00:01:34.807 [Pipeline] { (Tests) 00:01:34.824 [Pipeline] sh 00:01:35.105 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.378 [Pipeline] sh 00:01:35.660 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.932 [Pipeline] timeout 00:01:35.932 Timeout set to expire in 1 hr 30 min 00:01:35.934 [Pipeline] { 00:01:35.947 [Pipeline] sh 00:01:36.226 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:36.794 HEAD is now at fc96810c2 bdev: remove bdev from examine allow list on unregister 00:01:36.807 [Pipeline] sh 00:01:37.091 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.365 [Pipeline] sh 00:01:37.653 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.928 [Pipeline] sh 00:01:38.209 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:38.469 ++ readlink -f spdk_repo 00:01:38.469 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.469 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.469 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.469 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.469 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.469 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.469 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.469 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:38.469 + cd /home/vagrant/spdk_repo 00:01:38.469 + source /etc/os-release 00:01:38.469 ++ NAME='Fedora Linux' 00:01:38.469 ++ VERSION='39 (Cloud Edition)' 00:01:38.469 ++ ID=fedora 00:01:38.469 ++ VERSION_ID=39 00:01:38.469 ++ VERSION_CODENAME= 00:01:38.469 ++ PLATFORM_ID=platform:f39 00:01:38.469 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:38.469 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.469 ++ LOGO=fedora-logo-icon 00:01:38.469 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:38.469 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.469 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:38.469 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.469 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.469 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.469 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:38.469 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.469 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:38.469 ++ SUPPORT_END=2024-11-12 00:01:38.469 ++ VARIANT='Cloud Edition' 00:01:38.469 ++ VARIANT_ID=cloud 00:01:38.469 + uname -a 00:01:38.469 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:38.469 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:39.037 Hugepages 00:01:39.037 node hugesize free / total 00:01:39.037 node0 1048576kB 0 / 0 00:01:39.037 node0 2048kB 0 / 0 00:01:39.037 00:01:39.037 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.037 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:39.037 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:39.037 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:39.037 + rm -f /tmp/spdk-ld-path 00:01:39.037 + source autorun-spdk.conf 00:01:39.037 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.037 ++ SPDK_RUN_ASAN=1 00:01:39.037 ++ SPDK_RUN_UBSAN=1 00:01:39.037 ++ SPDK_TEST_RAID=1 00:01:39.037 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.037 ++ RUN_NIGHTLY=0 00:01:39.037 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.037 + [[ -n '' ]] 00:01:39.037 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:39.037 + for M in /var/spdk/build-*-manifest.txt 00:01:39.037 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:39.037 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.037 + for M in /var/spdk/build-*-manifest.txt 00:01:39.037 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.037 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.037 + for M in /var/spdk/build-*-manifest.txt 00:01:39.037 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.037 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.037 ++ uname 00:01:39.037 + [[ Linux == \L\i\n\u\x ]] 00:01:39.037 + sudo dmesg -T 00:01:39.037 + sudo dmesg --clear 00:01:39.037 + dmesg_pid=5204 00:01:39.037 + [[ Fedora Linux == FreeBSD ]] 00:01:39.037 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.037 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.037 + sudo dmesg -Tw 00:01:39.037 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.037 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.037 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.037 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.037 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.037 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:39.037 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.037 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.037 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.037 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.037 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.037 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.037 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.297 09:55:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:39.297 09:55:53 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.297 09:55:53 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:39.297 09:55:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:39.297 09:55:53 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.297 09:55:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:39.297 09:55:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:39.297 09:55:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:39.298 09:55:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.298 09:55:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.298 09:55:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.298 09:55:53 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.298 09:55:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.298 09:55:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.298 09:55:53 -- paths/export.sh@5 -- $ export PATH 00:01:39.298 09:55:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.298 09:55:53 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:39.298 09:55:53 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:39.298 09:55:53 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732010153.XXXXXX 00:01:39.298 09:55:53 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732010153.ZUQN76 00:01:39.298 09:55:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:39.298 09:55:53 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:39.298 09:55:53 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:39.298 09:55:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:39.298 09:55:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.298 09:55:53 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:39.298 09:55:53 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:39.298 09:55:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.298 09:55:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:39.298 09:55:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:39.298 09:55:53 -- pm/common@17 -- $ local monitor 00:01:39.298 09:55:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.298 09:55:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.298 09:55:53 -- pm/common@25 -- $ sleep 1 00:01:39.298 09:55:53 -- pm/common@21 -- $ date +%s 00:01:39.298 09:55:53 -- pm/common@21 -- $ date +%s 00:01:39.298 
09:55:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732010153 00:01:39.298 09:55:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732010153 00:01:39.298 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732010153_collect-cpu-load.pm.log 00:01:39.298 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732010153_collect-vmstat.pm.log 00:01:40.246 09:55:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:40.246 09:55:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.246 09:55:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.246 09:55:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:40.246 09:55:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.246 Tue Nov 19 09:55:54 AM UTC 2024 00:01:40.246 09:55:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.246 v25.01-pre-194-gfc96810c2 00:01:40.246 09:55:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:40.246 09:55:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:40.246 09:55:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.246 09:55:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.246 09:55:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.246 ************************************ 00:01:40.246 START TEST asan 00:01:40.246 ************************************ 00:01:40.246 using asan 00:01:40.246 09:55:54 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:40.246 00:01:40.246 real 0m0.000s 00:01:40.246 user 0m0.000s 00:01:40.246 sys 0m0.000s 00:01:40.246 09:55:54 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.246 09:55:54 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:40.246 ************************************ 00:01:40.246 END TEST asan 00:01:40.246 ************************************ 00:01:40.542 09:55:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.542 09:55:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.542 09:55:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.542 09:55:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.542 09:55:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.542 ************************************ 00:01:40.542 START TEST ubsan 00:01:40.542 ************************************ 00:01:40.542 using ubsan 00:01:40.542 09:55:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:40.542 00:01:40.542 real 0m0.000s 00:01:40.542 user 0m0.000s 00:01:40.542 sys 0m0.000s 00:01:40.542 09:55:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.542 09:55:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.542 ************************************ 00:01:40.542 END TEST ubsan 00:01:40.542 ************************************ 00:01:40.542 09:55:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:40.542 09:55:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:40.542 09:55:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:40.542 09:55:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:40.542 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:40.542 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:41.131 Using 'verbs' RDMA provider 00:01:56.954 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:09.161 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:09.988 Creating mk/config.mk...done. 00:02:09.988 Creating mk/cc.flags.mk...done. 00:02:09.988 Type 'make' to build. 00:02:09.988 09:56:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:09.988 09:56:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.988 09:56:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.988 09:56:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.988 ************************************ 00:02:09.988 START TEST make 00:02:09.988 ************************************ 00:02:09.988 09:56:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:10.247 make[1]: Nothing to be done for 'all'. 
00:02:22.452 The Meson build system 00:02:22.452 Version: 1.5.0 00:02:22.452 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:22.452 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:22.452 Build type: native build 00:02:22.452 Program cat found: YES (/usr/bin/cat) 00:02:22.452 Project name: DPDK 00:02:22.452 Project version: 24.03.0 00:02:22.452 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:22.452 C linker for the host machine: cc ld.bfd 2.40-14 00:02:22.452 Host machine cpu family: x86_64 00:02:22.452 Host machine cpu: x86_64 00:02:22.452 Message: ## Building in Developer Mode ## 00:02:22.452 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.452 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.452 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.452 Program python3 found: YES (/usr/bin/python3) 00:02:22.452 Program cat found: YES (/usr/bin/cat) 00:02:22.452 Compiler for C supports arguments -march=native: YES 00:02:22.452 Checking for size of "void *" : 8 00:02:22.452 Checking for size of "void *" : 8 (cached) 00:02:22.452 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:22.452 Library m found: YES 00:02:22.452 Library numa found: YES 00:02:22.452 Has header "numaif.h" : YES 00:02:22.452 Library fdt found: NO 00:02:22.452 Library execinfo found: NO 00:02:22.452 Has header "execinfo.h" : YES 00:02:22.452 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.452 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.452 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.452 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.452 Run-time dependency openssl found: YES 3.1.1 00:02:22.452 Run-time dependency libpcap found: YES 1.10.4 00:02:22.453 Has header "pcap.h" with dependency 
libpcap: YES 00:02:22.453 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.453 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.453 Compiler for C supports arguments -Wformat: YES 00:02:22.453 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:22.453 Compiler for C supports arguments -Wformat-security: NO 00:02:22.453 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.453 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.453 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.453 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.453 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.453 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.453 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.453 Compiler for C supports arguments -Wundef: YES 00:02:22.453 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.453 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.453 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:22.453 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.453 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:22.453 Program objdump found: YES (/usr/bin/objdump) 00:02:22.453 Compiler for C supports arguments -mavx512f: YES 00:02:22.453 Checking if "AVX512 checking" compiles: YES 00:02:22.453 Fetching value of define "__SSE4_2__" : 1 00:02:22.453 Fetching value of define "__AES__" : 1 00:02:22.453 Fetching value of define "__AVX__" : 1 00:02:22.453 Fetching value of define "__AVX2__" : 1 00:02:22.453 Fetching value of define "__AVX512BW__" : (undefined) 00:02:22.453 Fetching value of define "__AVX512CD__" : (undefined) 00:02:22.453 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:22.453 Fetching value of define "__AVX512F__" : (undefined) 00:02:22.453 Fetching value of define "__AVX512VL__" : 
(undefined) 00:02:22.453 Fetching value of define "__PCLMUL__" : 1 00:02:22.453 Fetching value of define "__RDRND__" : 1 00:02:22.453 Fetching value of define "__RDSEED__" : 1 00:02:22.453 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.453 Fetching value of define "__znver1__" : (undefined) 00:02:22.453 Fetching value of define "__znver2__" : (undefined) 00:02:22.453 Fetching value of define "__znver3__" : (undefined) 00:02:22.453 Fetching value of define "__znver4__" : (undefined) 00:02:22.453 Library asan found: YES 00:02:22.453 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.453 Message: lib/log: Defining dependency "log" 00:02:22.453 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.453 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.453 Library rt found: YES 00:02:22.453 Checking for function "getentropy" : NO 00:02:22.453 Message: lib/eal: Defining dependency "eal" 00:02:22.453 Message: lib/ring: Defining dependency "ring" 00:02:22.453 Message: lib/rcu: Defining dependency "rcu" 00:02:22.453 Message: lib/mempool: Defining dependency "mempool" 00:02:22.453 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.453 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.453 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.453 Compiler for C supports arguments -mpclmul: YES 00:02:22.453 Compiler for C supports arguments -maes: YES 00:02:22.453 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.453 Compiler for C supports arguments -mavx512bw: YES 00:02:22.453 Compiler for C supports arguments -mavx512dq: YES 00:02:22.453 Compiler for C supports arguments -mavx512vl: YES 00:02:22.453 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:22.453 Compiler for C supports arguments -mavx2: YES 00:02:22.453 Compiler for C supports arguments -mavx: YES 00:02:22.453 Message: lib/net: Defining dependency "net" 00:02:22.453 Message: lib/meter: Defining 
dependency "meter" 00:02:22.453 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.453 Message: lib/pci: Defining dependency "pci" 00:02:22.453 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.453 Message: lib/hash: Defining dependency "hash" 00:02:22.453 Message: lib/timer: Defining dependency "timer" 00:02:22.453 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.453 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.453 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.453 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.453 Message: lib/power: Defining dependency "power" 00:02:22.453 Message: lib/reorder: Defining dependency "reorder" 00:02:22.453 Message: lib/security: Defining dependency "security" 00:02:22.453 Has header "linux/userfaultfd.h" : YES 00:02:22.453 Has header "linux/vduse.h" : YES 00:02:22.453 Message: lib/vhost: Defining dependency "vhost" 00:02:22.453 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.453 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.453 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.453 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.453 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.453 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.453 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.453 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.453 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.453 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:22.453 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.453 Configuring doxy-api-html.conf using configuration 00:02:22.453 Configuring doxy-api-man.conf using configuration 00:02:22.453 Program mandb found: YES 
(/usr/bin/mandb) 00:02:22.453 Program sphinx-build found: NO 00:02:22.453 Configuring rte_build_config.h using configuration 00:02:22.453 Message: 00:02:22.453 ================= 00:02:22.453 Applications Enabled 00:02:22.453 ================= 00:02:22.453 00:02:22.453 apps: 00:02:22.453 00:02:22.453 00:02:22.453 Message: 00:02:22.453 ================= 00:02:22.453 Libraries Enabled 00:02:22.453 ================= 00:02:22.453 00:02:22.453 libs: 00:02:22.453 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.453 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.453 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.453 00:02:22.453 Message: 00:02:22.453 =============== 00:02:22.453 Drivers Enabled 00:02:22.453 =============== 00:02:22.453 00:02:22.453 common: 00:02:22.453 00:02:22.453 bus: 00:02:22.453 pci, vdev, 00:02:22.453 mempool: 00:02:22.453 ring, 00:02:22.453 dma: 00:02:22.453 00:02:22.453 net: 00:02:22.453 00:02:22.453 crypto: 00:02:22.453 00:02:22.453 compress: 00:02:22.453 00:02:22.453 vdpa: 00:02:22.453 00:02:22.453 00:02:22.453 Message: 00:02:22.453 ================= 00:02:22.453 Content Skipped 00:02:22.453 ================= 00:02:22.453 00:02:22.453 apps: 00:02:22.453 dumpcap: explicitly disabled via build config 00:02:22.453 graph: explicitly disabled via build config 00:02:22.453 pdump: explicitly disabled via build config 00:02:22.453 proc-info: explicitly disabled via build config 00:02:22.453 test-acl: explicitly disabled via build config 00:02:22.453 test-bbdev: explicitly disabled via build config 00:02:22.453 test-cmdline: explicitly disabled via build config 00:02:22.453 test-compress-perf: explicitly disabled via build config 00:02:22.453 test-crypto-perf: explicitly disabled via build config 00:02:22.453 test-dma-perf: explicitly disabled via build config 00:02:22.453 test-eventdev: explicitly disabled via build config 00:02:22.453 test-fib: explicitly disabled via build config 00:02:22.453 
test-flow-perf: explicitly disabled via build config 00:02:22.453 test-gpudev: explicitly disabled via build config 00:02:22.453 test-mldev: explicitly disabled via build config 00:02:22.453 test-pipeline: explicitly disabled via build config 00:02:22.453 test-pmd: explicitly disabled via build config 00:02:22.453 test-regex: explicitly disabled via build config 00:02:22.453 test-sad: explicitly disabled via build config 00:02:22.453 test-security-perf: explicitly disabled via build config 00:02:22.453 00:02:22.453 libs: 00:02:22.453 argparse: explicitly disabled via build config 00:02:22.453 metrics: explicitly disabled via build config 00:02:22.453 acl: explicitly disabled via build config 00:02:22.453 bbdev: explicitly disabled via build config 00:02:22.453 bitratestats: explicitly disabled via build config 00:02:22.453 bpf: explicitly disabled via build config 00:02:22.453 cfgfile: explicitly disabled via build config 00:02:22.453 distributor: explicitly disabled via build config 00:02:22.453 efd: explicitly disabled via build config 00:02:22.453 eventdev: explicitly disabled via build config 00:02:22.453 dispatcher: explicitly disabled via build config 00:02:22.453 gpudev: explicitly disabled via build config 00:02:22.453 gro: explicitly disabled via build config 00:02:22.453 gso: explicitly disabled via build config 00:02:22.453 ip_frag: explicitly disabled via build config 00:02:22.453 jobstats: explicitly disabled via build config 00:02:22.453 latencystats: explicitly disabled via build config 00:02:22.453 lpm: explicitly disabled via build config 00:02:22.453 member: explicitly disabled via build config 00:02:22.453 pcapng: explicitly disabled via build config 00:02:22.453 rawdev: explicitly disabled via build config 00:02:22.453 regexdev: explicitly disabled via build config 00:02:22.453 mldev: explicitly disabled via build config 00:02:22.453 rib: explicitly disabled via build config 00:02:22.453 sched: explicitly disabled via build config 00:02:22.453 
stack: explicitly disabled via build config 00:02:22.453 ipsec: explicitly disabled via build config 00:02:22.453 pdcp: explicitly disabled via build config 00:02:22.453 fib: explicitly disabled via build config 00:02:22.453 port: explicitly disabled via build config 00:02:22.453 pdump: explicitly disabled via build config 00:02:22.453 table: explicitly disabled via build config 00:02:22.453 pipeline: explicitly disabled via build config 00:02:22.453 graph: explicitly disabled via build config 00:02:22.454 node: explicitly disabled via build config 00:02:22.454 00:02:22.454 drivers: 00:02:22.454 common/cpt: not in enabled drivers build config 00:02:22.454 common/dpaax: not in enabled drivers build config 00:02:22.454 common/iavf: not in enabled drivers build config 00:02:22.454 common/idpf: not in enabled drivers build config 00:02:22.454 common/ionic: not in enabled drivers build config 00:02:22.454 common/mvep: not in enabled drivers build config 00:02:22.454 common/octeontx: not in enabled drivers build config 00:02:22.454 bus/auxiliary: not in enabled drivers build config 00:02:22.454 bus/cdx: not in enabled drivers build config 00:02:22.454 bus/dpaa: not in enabled drivers build config 00:02:22.454 bus/fslmc: not in enabled drivers build config 00:02:22.454 bus/ifpga: not in enabled drivers build config 00:02:22.454 bus/platform: not in enabled drivers build config 00:02:22.454 bus/uacce: not in enabled drivers build config 00:02:22.454 bus/vmbus: not in enabled drivers build config 00:02:22.454 common/cnxk: not in enabled drivers build config 00:02:22.454 common/mlx5: not in enabled drivers build config 00:02:22.454 common/nfp: not in enabled drivers build config 00:02:22.454 common/nitrox: not in enabled drivers build config 00:02:22.454 common/qat: not in enabled drivers build config 00:02:22.454 common/sfc_efx: not in enabled drivers build config 00:02:22.454 mempool/bucket: not in enabled drivers build config 00:02:22.454 mempool/cnxk: not in enabled 
drivers build config 00:02:22.454 mempool/dpaa: not in enabled drivers build config 00:02:22.454 mempool/dpaa2: not in enabled drivers build config 00:02:22.454 mempool/octeontx: not in enabled drivers build config 00:02:22.454 mempool/stack: not in enabled drivers build config 00:02:22.454 dma/cnxk: not in enabled drivers build config 00:02:22.454 dma/dpaa: not in enabled drivers build config 00:02:22.454 dma/dpaa2: not in enabled drivers build config 00:02:22.454 dma/hisilicon: not in enabled drivers build config 00:02:22.454 dma/idxd: not in enabled drivers build config 00:02:22.454 dma/ioat: not in enabled drivers build config 00:02:22.454 dma/skeleton: not in enabled drivers build config 00:02:22.454 net/af_packet: not in enabled drivers build config 00:02:22.454 net/af_xdp: not in enabled drivers build config 00:02:22.454 net/ark: not in enabled drivers build config 00:02:22.454 net/atlantic: not in enabled drivers build config 00:02:22.454 net/avp: not in enabled drivers build config 00:02:22.454 net/axgbe: not in enabled drivers build config 00:02:22.454 net/bnx2x: not in enabled drivers build config 00:02:22.454 net/bnxt: not in enabled drivers build config 00:02:22.454 net/bonding: not in enabled drivers build config 00:02:22.454 net/cnxk: not in enabled drivers build config 00:02:22.454 net/cpfl: not in enabled drivers build config 00:02:22.454 net/cxgbe: not in enabled drivers build config 00:02:22.454 net/dpaa: not in enabled drivers build config 00:02:22.454 net/dpaa2: not in enabled drivers build config 00:02:22.454 net/e1000: not in enabled drivers build config 00:02:22.454 net/ena: not in enabled drivers build config 00:02:22.454 net/enetc: not in enabled drivers build config 00:02:22.454 net/enetfec: not in enabled drivers build config 00:02:22.454 net/enic: not in enabled drivers build config 00:02:22.454 net/failsafe: not in enabled drivers build config 00:02:22.454 net/fm10k: not in enabled drivers build config 00:02:22.454 net/gve: not in 
enabled drivers build config 00:02:22.454 net/hinic: not in enabled drivers build config 00:02:22.454 net/hns3: not in enabled drivers build config 00:02:22.454 net/i40e: not in enabled drivers build config 00:02:22.454 net/iavf: not in enabled drivers build config 00:02:22.454 net/ice: not in enabled drivers build config 00:02:22.454 net/idpf: not in enabled drivers build config 00:02:22.454 net/igc: not in enabled drivers build config 00:02:22.454 net/ionic: not in enabled drivers build config 00:02:22.454 net/ipn3ke: not in enabled drivers build config 00:02:22.454 net/ixgbe: not in enabled drivers build config 00:02:22.454 net/mana: not in enabled drivers build config 00:02:22.454 net/memif: not in enabled drivers build config 00:02:22.454 net/mlx4: not in enabled drivers build config 00:02:22.454 net/mlx5: not in enabled drivers build config 00:02:22.454 net/mvneta: not in enabled drivers build config 00:02:22.454 net/mvpp2: not in enabled drivers build config 00:02:22.454 net/netvsc: not in enabled drivers build config 00:02:22.454 net/nfb: not in enabled drivers build config 00:02:22.454 net/nfp: not in enabled drivers build config 00:02:22.454 net/ngbe: not in enabled drivers build config 00:02:22.454 net/null: not in enabled drivers build config 00:02:22.454 net/octeontx: not in enabled drivers build config 00:02:22.454 net/octeon_ep: not in enabled drivers build config 00:02:22.454 net/pcap: not in enabled drivers build config 00:02:22.454 net/pfe: not in enabled drivers build config 00:02:22.454 net/qede: not in enabled drivers build config 00:02:22.454 net/ring: not in enabled drivers build config 00:02:22.454 net/sfc: not in enabled drivers build config 00:02:22.454 net/softnic: not in enabled drivers build config 00:02:22.454 net/tap: not in enabled drivers build config 00:02:22.454 net/thunderx: not in enabled drivers build config 00:02:22.454 net/txgbe: not in enabled drivers build config 00:02:22.454 net/vdev_netvsc: not in enabled drivers build 
config 00:02:22.454 net/vhost: not in enabled drivers build config 00:02:22.454 net/virtio: not in enabled drivers build config 00:02:22.454 net/vmxnet3: not in enabled drivers build config 00:02:22.454 raw/*: missing internal dependency, "rawdev" 00:02:22.454 crypto/armv8: not in enabled drivers build config 00:02:22.454 crypto/bcmfs: not in enabled drivers build config 00:02:22.454 crypto/caam_jr: not in enabled drivers build config 00:02:22.454 crypto/ccp: not in enabled drivers build config 00:02:22.454 crypto/cnxk: not in enabled drivers build config 00:02:22.454 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.454 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.454 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.454 crypto/mlx5: not in enabled drivers build config 00:02:22.454 crypto/mvsam: not in enabled drivers build config 00:02:22.454 crypto/nitrox: not in enabled drivers build config 00:02:22.454 crypto/null: not in enabled drivers build config 00:02:22.454 crypto/octeontx: not in enabled drivers build config 00:02:22.454 crypto/openssl: not in enabled drivers build config 00:02:22.454 crypto/scheduler: not in enabled drivers build config 00:02:22.454 crypto/uadk: not in enabled drivers build config 00:02:22.454 crypto/virtio: not in enabled drivers build config 00:02:22.454 compress/isal: not in enabled drivers build config 00:02:22.454 compress/mlx5: not in enabled drivers build config 00:02:22.454 compress/nitrox: not in enabled drivers build config 00:02:22.454 compress/octeontx: not in enabled drivers build config 00:02:22.454 compress/zlib: not in enabled drivers build config 00:02:22.454 regex/*: missing internal dependency, "regexdev" 00:02:22.454 ml/*: missing internal dependency, "mldev" 00:02:22.454 vdpa/ifc: not in enabled drivers build config 00:02:22.454 vdpa/mlx5: not in enabled drivers build config 00:02:22.454 vdpa/nfp: not in enabled drivers build config 00:02:22.454 vdpa/sfc: not in enabled 
drivers build config 00:02:22.454 event/*: missing internal dependency, "eventdev" 00:02:22.454 baseband/*: missing internal dependency, "bbdev" 00:02:22.454 gpu/*: missing internal dependency, "gpudev" 00:02:22.454 00:02:22.454 00:02:22.454 Build targets in project: 85 00:02:22.454 00:02:22.454 DPDK 24.03.0 00:02:22.454 00:02:22.454 User defined options 00:02:22.454 buildtype : debug 00:02:22.454 default_library : shared 00:02:22.454 libdir : lib 00:02:22.454 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.454 b_sanitize : address 00:02:22.454 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:22.454 c_link_args : 00:02:22.454 cpu_instruction_set: native 00:02:22.454 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:22.454 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:22.454 enable_docs : false 00:02:22.454 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:22.454 enable_kmods : false 00:02:22.454 max_lcores : 128 00:02:22.454 tests : false 00:02:22.454 00:02:22.454 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.454 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:22.454 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.454 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.454 [3/268] Linking static target lib/librte_kvargs.a 00:02:22.454 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.454 [5/268] Linking static target lib/librte_log.a 00:02:22.454 
[6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.713 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.713 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.713 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.971 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.971 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.971 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.971 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.971 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.971 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.230 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.230 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.230 [18/268] Linking target lib/librte_log.so.24.1 00:02:23.230 [19/268] Linking static target lib/librte_telemetry.a 00:02:23.230 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.489 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:23.489 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:23.748 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.748 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:23.748 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.748 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.748 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:02:24.008 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.008 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.008 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.008 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.008 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.008 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.008 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.268 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.527 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.527 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.527 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.527 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:24.786 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.786 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.786 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:24.786 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:24.786 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.786 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.045 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.045 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.045 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.304 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.575 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.575 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.575 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.862 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:25.862 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:25.862 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:25.862 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:25.862 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.120 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.120 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.120 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.378 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.378 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.378 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.378 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.378 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.638 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.638 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.638 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:26.638 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.897 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:26.897 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.897 [72/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.156 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.156 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.156 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.156 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.156 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.156 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.415 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.415 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.415 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.673 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.673 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.673 [84/268] Linking static target lib/librte_ring.a 00:02:27.673 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.931 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.931 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.931 [88/268] Linking static target lib/librte_eal.a 00:02:28.190 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.190 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.190 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.190 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.190 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.190 [94/268] Linking static target lib/librte_mempool.a 00:02:28.190 [95/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.190 [96/268] Linking static target lib/librte_rcu.a 00:02:28.449 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.708 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:28.708 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.708 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.708 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.708 [102/268] Linking static target lib/librte_mbuf.a 00:02:28.708 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.967 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.967 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.227 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.227 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.227 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.227 [109/268] Linking static target lib/librte_net.a 00:02:29.486 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.486 [111/268] Linking static target lib/librte_meter.a 00:02:29.486 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.486 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:29.486 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:29.745 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:29.745 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.005 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.005 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:30.005 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.265 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.524 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.524 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.782 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.782 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.041 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.041 [126/268] Linking static target lib/librte_pci.a 00:02:31.041 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.041 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.300 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.300 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.300 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.300 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.300 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.300 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.300 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.559 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.559 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:31.559 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.559 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.559 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.559 [141/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.559 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.559 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.559 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:31.818 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.818 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.818 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.818 [148/268] Linking static target lib/librte_cmdline.a 00:02:32.387 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.387 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.387 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.387 [152/268] Linking static target lib/librte_timer.a 00:02:32.647 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:32.647 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.907 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.907 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.907 [157/268] Linking static target lib/librte_ethdev.a 00:02:32.907 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.907 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.165 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.165 [161/268] Linking static target lib/librte_hash.a 00:02:33.165 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:33.165 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.165 
[164/268] Linking static target lib/librte_compressdev.a 00:02:33.165 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.735 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:33.735 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.735 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.735 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.735 [170/268] Linking static target lib/librte_dmadev.a 00:02:33.735 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.995 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.995 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:34.255 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.255 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.255 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.514 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.514 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.514 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.514 [180/268] Linking static target lib/librte_cryptodev.a 00:02:34.514 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.774 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.774 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.774 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.342 [185/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.342 [186/268] Linking static target lib/librte_reorder.a 00:02:35.342 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.342 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.342 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.342 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.601 [191/268] Linking static target lib/librte_power.a 00:02:35.601 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.601 [193/268] Linking static target lib/librte_security.a 00:02:35.861 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.861 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.430 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.689 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.689 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.689 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.689 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.689 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.948 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.208 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.208 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.468 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.468 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.468 [207/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.468 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.468 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.727 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.727 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.727 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.986 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.986 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.986 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:37.986 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.986 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.986 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.986 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:38.245 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.246 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.246 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.505 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.505 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.505 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:38.505 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.505 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:39.075 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.645 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [230/268] Linking target lib/librte_eal.so.24.1 00:02:39.905 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.905 [232/268] Linking target lib/librte_timer.so.24.1 00:02:39.905 [233/268] Linking target lib/librte_meter.so.24.1 00:02:39.905 [234/268] Linking target lib/librte_pci.so.24.1 00:02:39.905 [235/268] Linking target lib/librte_ring.so.24.1 00:02:39.905 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:39.905 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:40.164 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:40.164 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:40.164 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:40.164 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:40.164 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:40.164 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.164 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:40.164 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:40.423 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.423 [247/268] Linking target lib/librte_mbuf.so.24.1 00:02:40.423 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.423 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.423 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:40.683 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:40.683 [252/268] Linking target 
lib/librte_cryptodev.so.24.1 00:02:40.683 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:40.683 [254/268] Linking target lib/librte_net.so.24.1 00:02:40.683 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.683 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.683 [257/268] Linking target lib/librte_security.so.24.1 00:02:40.683 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:40.683 [259/268] Linking target lib/librte_hash.so.24.1 00:02:40.942 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.942 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.202 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:41.202 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:41.462 [264/268] Linking target lib/librte_power.so.24.1 00:02:42.841 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.841 [266/268] Linking static target lib/librte_vhost.a 00:02:44.220 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.479 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:44.479 INFO: autodetecting backend as ninja 00:02:44.479 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.597 CC lib/ut/ut.o 00:03:02.597 CC lib/log/log.o 00:03:02.597 CC lib/ut_mock/mock.o 00:03:02.597 CC lib/log/log_flags.o 00:03:02.597 CC lib/log/log_deprecated.o 00:03:02.597 LIB libspdk_ut.a 00:03:02.857 SO libspdk_ut.so.2.0 00:03:02.857 LIB libspdk_log.a 00:03:02.857 LIB libspdk_ut_mock.a 00:03:02.857 SO libspdk_log.so.7.1 00:03:02.857 SO libspdk_ut_mock.so.6.0 00:03:02.857 SYMLINK libspdk_ut.so 00:03:02.857 SYMLINK libspdk_ut_mock.so 00:03:02.857 SYMLINK libspdk_log.so 00:03:03.116 
CXX lib/trace_parser/trace.o 00:03:03.116 CC lib/ioat/ioat.o 00:03:03.116 CC lib/util/base64.o 00:03:03.116 CC lib/util/bit_array.o 00:03:03.116 CC lib/util/cpuset.o 00:03:03.116 CC lib/util/crc16.o 00:03:03.116 CC lib/util/crc32.o 00:03:03.116 CC lib/util/crc32c.o 00:03:03.116 CC lib/dma/dma.o 00:03:03.116 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.374 CC lib/util/crc32_ieee.o 00:03:03.374 CC lib/util/crc64.o 00:03:03.374 CC lib/vfio_user/host/vfio_user.o 00:03:03.374 CC lib/util/dif.o 00:03:03.374 CC lib/util/fd.o 00:03:03.374 LIB libspdk_dma.a 00:03:03.374 CC lib/util/fd_group.o 00:03:03.374 SO libspdk_dma.so.5.0 00:03:03.374 CC lib/util/file.o 00:03:03.374 CC lib/util/hexlify.o 00:03:03.633 LIB libspdk_ioat.a 00:03:03.633 SYMLINK libspdk_dma.so 00:03:03.633 CC lib/util/iov.o 00:03:03.633 CC lib/util/math.o 00:03:03.633 SO libspdk_ioat.so.7.0 00:03:03.633 LIB libspdk_vfio_user.a 00:03:03.633 CC lib/util/net.o 00:03:03.633 SYMLINK libspdk_ioat.so 00:03:03.633 CC lib/util/pipe.o 00:03:03.633 SO libspdk_vfio_user.so.5.0 00:03:03.633 CC lib/util/strerror_tls.o 00:03:03.633 CC lib/util/string.o 00:03:03.633 SYMLINK libspdk_vfio_user.so 00:03:03.633 CC lib/util/uuid.o 00:03:03.633 CC lib/util/xor.o 00:03:03.633 CC lib/util/zipf.o 00:03:03.633 CC lib/util/md5.o 00:03:04.200 LIB libspdk_util.a 00:03:04.201 SO libspdk_util.so.10.1 00:03:04.201 LIB libspdk_trace_parser.a 00:03:04.201 SO libspdk_trace_parser.so.6.0 00:03:04.459 SYMLINK libspdk_util.so 00:03:04.459 SYMLINK libspdk_trace_parser.so 00:03:04.459 CC lib/conf/conf.o 00:03:04.459 CC lib/rdma_utils/rdma_utils.o 00:03:04.459 CC lib/env_dpdk/env.o 00:03:04.459 CC lib/env_dpdk/memory.o 00:03:04.459 CC lib/vmd/vmd.o 00:03:04.459 CC lib/json/json_parse.o 00:03:04.459 CC lib/vmd/led.o 00:03:04.459 CC lib/idxd/idxd.o 00:03:04.459 CC lib/env_dpdk/pci.o 00:03:04.459 CC lib/json/json_util.o 00:03:04.718 CC lib/env_dpdk/init.o 00:03:04.718 CC lib/env_dpdk/threads.o 00:03:04.718 CC lib/json/json_write.o 00:03:04.718 LIB 
libspdk_rdma_utils.a 00:03:04.976 SO libspdk_rdma_utils.so.1.0 00:03:04.976 LIB libspdk_conf.a 00:03:04.976 SO libspdk_conf.so.6.0 00:03:04.976 SYMLINK libspdk_rdma_utils.so 00:03:04.976 CC lib/env_dpdk/pci_ioat.o 00:03:04.976 SYMLINK libspdk_conf.so 00:03:04.976 CC lib/env_dpdk/pci_virtio.o 00:03:04.976 CC lib/env_dpdk/pci_vmd.o 00:03:04.976 CC lib/env_dpdk/pci_idxd.o 00:03:04.976 CC lib/idxd/idxd_user.o 00:03:05.234 CC lib/idxd/idxd_kernel.o 00:03:05.234 CC lib/rdma_provider/common.o 00:03:05.234 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:05.234 LIB libspdk_json.a 00:03:05.234 SO libspdk_json.so.6.0 00:03:05.234 CC lib/env_dpdk/pci_event.o 00:03:05.234 CC lib/env_dpdk/sigbus_handler.o 00:03:05.234 SYMLINK libspdk_json.so 00:03:05.234 CC lib/env_dpdk/pci_dpdk.o 00:03:05.234 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.234 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:05.492 LIB libspdk_idxd.a 00:03:05.492 SO libspdk_idxd.so.12.1 00:03:05.492 LIB libspdk_vmd.a 00:03:05.492 LIB libspdk_rdma_provider.a 00:03:05.492 SO libspdk_vmd.so.6.0 00:03:05.492 SYMLINK libspdk_idxd.so 00:03:05.492 SO libspdk_rdma_provider.so.7.0 00:03:05.492 SYMLINK libspdk_vmd.so 00:03:05.493 SYMLINK libspdk_rdma_provider.so 00:03:05.493 CC lib/jsonrpc/jsonrpc_server.o 00:03:05.493 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.493 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.493 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.061 LIB libspdk_jsonrpc.a 00:03:06.061 SO libspdk_jsonrpc.so.6.0 00:03:06.061 SYMLINK libspdk_jsonrpc.so 00:03:06.320 CC lib/rpc/rpc.o 00:03:06.320 LIB libspdk_env_dpdk.a 00:03:06.579 SO libspdk_env_dpdk.so.15.1 00:03:06.579 LIB libspdk_rpc.a 00:03:06.579 SO libspdk_rpc.so.6.0 00:03:06.579 SYMLINK libspdk_env_dpdk.so 00:03:06.579 SYMLINK libspdk_rpc.so 00:03:06.839 CC lib/keyring/keyring.o 00:03:06.839 CC lib/keyring/keyring_rpc.o 00:03:06.839 CC lib/trace/trace_flags.o 00:03:06.839 CC lib/trace/trace.o 00:03:06.839 CC lib/trace/trace_rpc.o 00:03:06.839 CC lib/notify/notify.o 
00:03:06.839 CC lib/notify/notify_rpc.o 00:03:07.099 LIB libspdk_notify.a 00:03:07.099 SO libspdk_notify.so.6.0 00:03:07.099 LIB libspdk_keyring.a 00:03:07.358 SYMLINK libspdk_notify.so 00:03:07.358 SO libspdk_keyring.so.2.0 00:03:07.358 LIB libspdk_trace.a 00:03:07.358 SO libspdk_trace.so.11.0 00:03:07.358 SYMLINK libspdk_keyring.so 00:03:07.358 SYMLINK libspdk_trace.so 00:03:07.617 CC lib/thread/thread.o 00:03:07.617 CC lib/thread/iobuf.o 00:03:07.617 CC lib/sock/sock.o 00:03:07.617 CC lib/sock/sock_rpc.o 00:03:08.186 LIB libspdk_sock.a 00:03:08.445 SO libspdk_sock.so.10.0 00:03:08.445 SYMLINK libspdk_sock.so 00:03:08.704 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.704 CC lib/nvme/nvme_ctrlr.o 00:03:08.704 CC lib/nvme/nvme_fabric.o 00:03:08.704 CC lib/nvme/nvme_ns_cmd.o 00:03:08.704 CC lib/nvme/nvme_pcie_common.o 00:03:08.704 CC lib/nvme/nvme_ns.o 00:03:08.704 CC lib/nvme/nvme_qpair.o 00:03:08.704 CC lib/nvme/nvme_pcie.o 00:03:08.704 CC lib/nvme/nvme.o 00:03:09.642 LIB libspdk_thread.a 00:03:09.642 CC lib/nvme/nvme_quirks.o 00:03:09.642 CC lib/nvme/nvme_transport.o 00:03:09.642 SO libspdk_thread.so.11.0 00:03:09.642 CC lib/nvme/nvme_discovery.o 00:03:09.642 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.642 SYMLINK libspdk_thread.so 00:03:09.642 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.901 CC lib/nvme/nvme_tcp.o 00:03:09.901 CC lib/nvme/nvme_opal.o 00:03:09.901 CC lib/nvme/nvme_io_msg.o 00:03:10.160 CC lib/nvme/nvme_poll_group.o 00:03:10.160 CC lib/nvme/nvme_zns.o 00:03:10.420 CC lib/accel/accel.o 00:03:10.420 CC lib/blob/blobstore.o 00:03:10.420 CC lib/nvme/nvme_stubs.o 00:03:10.420 CC lib/init/json_config.o 00:03:10.420 CC lib/blob/request.o 00:03:10.680 CC lib/init/subsystem.o 00:03:10.939 CC lib/blob/zeroes.o 00:03:10.939 CC lib/init/subsystem_rpc.o 00:03:10.939 CC lib/blob/blob_bs_dev.o 00:03:10.939 CC lib/accel/accel_rpc.o 00:03:10.939 CC lib/accel/accel_sw.o 00:03:10.939 CC lib/init/rpc.o 00:03:11.198 CC lib/nvme/nvme_auth.o 00:03:11.198 CC lib/nvme/nvme_cuse.o 
00:03:11.198 CC lib/nvme/nvme_rdma.o 00:03:11.198 LIB libspdk_init.a 00:03:11.198 CC lib/virtio/virtio.o 00:03:11.198 SO libspdk_init.so.6.0 00:03:11.198 SYMLINK libspdk_init.so 00:03:11.457 CC lib/fsdev/fsdev.o 00:03:11.457 CC lib/event/app.o 00:03:11.716 CC lib/fsdev/fsdev_io.o 00:03:11.716 CC lib/virtio/virtio_vhost_user.o 00:03:11.716 CC lib/virtio/virtio_vfio_user.o 00:03:11.716 LIB libspdk_accel.a 00:03:11.975 SO libspdk_accel.so.16.0 00:03:11.975 CC lib/fsdev/fsdev_rpc.o 00:03:11.975 SYMLINK libspdk_accel.so 00:03:11.975 CC lib/virtio/virtio_pci.o 00:03:11.975 CC lib/event/reactor.o 00:03:12.234 CC lib/event/log_rpc.o 00:03:12.235 CC lib/event/app_rpc.o 00:03:12.235 CC lib/event/scheduler_static.o 00:03:12.235 LIB libspdk_fsdev.a 00:03:12.235 CC lib/bdev/bdev.o 00:03:12.235 CC lib/bdev/bdev_rpc.o 00:03:12.235 SO libspdk_fsdev.so.2.0 00:03:12.235 CC lib/bdev/bdev_zone.o 00:03:12.235 CC lib/bdev/part.o 00:03:12.494 SYMLINK libspdk_fsdev.so 00:03:12.494 CC lib/bdev/scsi_nvme.o 00:03:12.494 LIB libspdk_virtio.a 00:03:12.494 SO libspdk_virtio.so.7.0 00:03:12.494 LIB libspdk_event.a 00:03:12.494 SYMLINK libspdk_virtio.so 00:03:12.494 SO libspdk_event.so.14.0 00:03:12.494 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.754 SYMLINK libspdk_event.so 00:03:12.754 LIB libspdk_nvme.a 00:03:13.013 SO libspdk_nvme.so.15.0 00:03:13.271 LIB libspdk_fuse_dispatcher.a 00:03:13.271 SO libspdk_fuse_dispatcher.so.1.0 00:03:13.541 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.541 SYMLINK libspdk_nvme.so 00:03:14.928 LIB libspdk_blob.a 00:03:14.928 SO libspdk_blob.so.11.0 00:03:14.928 SYMLINK libspdk_blob.so 00:03:15.194 CC lib/lvol/lvol.o 00:03:15.194 CC lib/blobfs/blobfs.o 00:03:15.194 CC lib/blobfs/tree.o 00:03:16.131 LIB libspdk_bdev.a 00:03:16.131 SO libspdk_bdev.so.17.0 00:03:16.131 SYMLINK libspdk_bdev.so 00:03:16.390 CC lib/ublk/ublk.o 00:03:16.390 CC lib/ublk/ublk_rpc.o 00:03:16.390 CC lib/nvmf/ctrlr.o 00:03:16.390 CC lib/nvmf/ctrlr_discovery.o 00:03:16.390 CC 
lib/nvmf/ctrlr_bdev.o 00:03:16.390 CC lib/nbd/nbd.o 00:03:16.390 CC lib/scsi/dev.o 00:03:16.390 CC lib/ftl/ftl_core.o 00:03:16.390 LIB libspdk_blobfs.a 00:03:16.390 LIB libspdk_lvol.a 00:03:16.391 SO libspdk_blobfs.so.10.0 00:03:16.391 SO libspdk_lvol.so.10.0 00:03:16.649 SYMLINK libspdk_blobfs.so 00:03:16.649 CC lib/scsi/lun.o 00:03:16.649 SYMLINK libspdk_lvol.so 00:03:16.649 CC lib/scsi/port.o 00:03:16.649 CC lib/scsi/scsi.o 00:03:16.649 CC lib/scsi/scsi_bdev.o 00:03:16.650 CC lib/scsi/scsi_pr.o 00:03:16.650 CC lib/scsi/scsi_rpc.o 00:03:16.909 CC lib/ftl/ftl_init.o 00:03:16.909 CC lib/scsi/task.o 00:03:16.909 CC lib/nbd/nbd_rpc.o 00:03:16.909 CC lib/ftl/ftl_layout.o 00:03:16.909 CC lib/ftl/ftl_debug.o 00:03:17.168 CC lib/ftl/ftl_io.o 00:03:17.168 CC lib/ftl/ftl_sb.o 00:03:17.168 LIB libspdk_nbd.a 00:03:17.168 LIB libspdk_ublk.a 00:03:17.168 LIB libspdk_scsi.a 00:03:17.168 CC lib/nvmf/subsystem.o 00:03:17.168 SO libspdk_nbd.so.7.0 00:03:17.168 SO libspdk_ublk.so.3.0 00:03:17.168 CC lib/ftl/ftl_l2p.o 00:03:17.168 SO libspdk_scsi.so.9.0 00:03:17.168 SYMLINK libspdk_nbd.so 00:03:17.168 CC lib/ftl/ftl_l2p_flat.o 00:03:17.168 SYMLINK libspdk_ublk.so 00:03:17.168 CC lib/nvmf/nvmf.o 00:03:17.427 CC lib/nvmf/nvmf_rpc.o 00:03:17.427 CC lib/nvmf/transport.o 00:03:17.427 CC lib/nvmf/tcp.o 00:03:17.427 SYMLINK libspdk_scsi.so 00:03:17.427 CC lib/ftl/ftl_nv_cache.o 00:03:17.427 CC lib/ftl/ftl_band.o 00:03:17.427 CC lib/ftl/ftl_band_ops.o 00:03:17.427 CC lib/ftl/ftl_writer.o 00:03:17.686 CC lib/ftl/ftl_rq.o 00:03:17.945 CC lib/ftl/ftl_reloc.o 00:03:17.945 CC lib/ftl/ftl_l2p_cache.o 00:03:17.945 CC lib/ftl/ftl_p2l.o 00:03:18.205 CC lib/ftl/ftl_p2l_log.o 00:03:18.464 CC lib/iscsi/conn.o 00:03:18.464 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.464 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.464 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.464 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.724 CC lib/iscsi/init_grp.o 00:03:18.724 CC lib/iscsi/iscsi.o 00:03:18.724 CC lib/iscsi/param.o 
00:03:18.724 CC lib/iscsi/portal_grp.o 00:03:18.724 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.724 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.724 CC lib/iscsi/tgt_node.o 00:03:18.724 CC lib/vhost/vhost.o 00:03:18.983 CC lib/iscsi/iscsi_subsystem.o 00:03:18.983 CC lib/nvmf/stubs.o 00:03:18.983 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.983 CC lib/vhost/vhost_rpc.o 00:03:18.983 CC lib/vhost/vhost_scsi.o 00:03:19.243 CC lib/vhost/vhost_blk.o 00:03:19.243 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:19.243 CC lib/nvmf/mdns_server.o 00:03:19.502 CC lib/nvmf/rdma.o 00:03:19.502 CC lib/nvmf/auth.o 00:03:19.502 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.502 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.761 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.761 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.761 CC lib/vhost/rte_vhost_user.o 00:03:19.762 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.021 CC lib/ftl/utils/ftl_conf.o 00:03:20.021 CC lib/iscsi/iscsi_rpc.o 00:03:20.021 CC lib/ftl/utils/ftl_md.o 00:03:20.021 CC lib/iscsi/task.o 00:03:20.280 CC lib/ftl/utils/ftl_mempool.o 00:03:20.280 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.280 CC lib/ftl/utils/ftl_property.o 00:03:20.280 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:20.280 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:20.539 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:20.539 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:20.539 LIB libspdk_iscsi.a 00:03:20.539 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:20.539 SO libspdk_iscsi.so.8.0 00:03:20.539 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:20.539 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:20.539 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.539 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.539 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.539 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.799 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.799 SYMLINK libspdk_iscsi.so 00:03:20.799 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.799 CC lib/ftl/base/ftl_base_dev.o 00:03:20.799 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.799 CC 
lib/ftl/ftl_trace.o 00:03:21.058 LIB libspdk_vhost.a 00:03:21.058 LIB libspdk_ftl.a 00:03:21.058 SO libspdk_vhost.so.8.0 00:03:21.317 SYMLINK libspdk_vhost.so 00:03:21.317 SO libspdk_ftl.so.9.0 00:03:21.577 SYMLINK libspdk_ftl.so 00:03:22.513 LIB libspdk_nvmf.a 00:03:22.772 SO libspdk_nvmf.so.20.0 00:03:23.031 SYMLINK libspdk_nvmf.so 00:03:23.321 CC module/env_dpdk/env_dpdk_rpc.o 00:03:23.321 CC module/scheduler/gscheduler/gscheduler.o 00:03:23.583 CC module/accel/ioat/accel_ioat.o 00:03:23.583 CC module/blob/bdev/blob_bdev.o 00:03:23.583 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:23.583 CC module/accel/error/accel_error.o 00:03:23.583 CC module/sock/posix/posix.o 00:03:23.583 CC module/keyring/file/keyring.o 00:03:23.583 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:23.583 CC module/fsdev/aio/fsdev_aio.o 00:03:23.583 LIB libspdk_env_dpdk_rpc.a 00:03:23.583 SO libspdk_env_dpdk_rpc.so.6.0 00:03:23.583 SYMLINK libspdk_env_dpdk_rpc.so 00:03:23.583 CC module/keyring/file/keyring_rpc.o 00:03:23.583 CC module/accel/ioat/accel_ioat_rpc.o 00:03:23.583 LIB libspdk_scheduler_gscheduler.a 00:03:23.583 LIB libspdk_scheduler_dpdk_governor.a 00:03:23.583 SO libspdk_scheduler_gscheduler.so.4.0 00:03:23.583 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:23.583 LIB libspdk_scheduler_dynamic.a 00:03:23.583 CC module/accel/error/accel_error_rpc.o 00:03:23.583 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.583 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.583 SO libspdk_scheduler_dynamic.so.4.0 00:03:23.842 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:23.842 LIB libspdk_keyring_file.a 00:03:23.842 LIB libspdk_accel_ioat.a 00:03:23.842 SYMLINK libspdk_scheduler_dynamic.so 00:03:23.842 CC module/fsdev/aio/linux_aio_mgr.o 00:03:23.842 SO libspdk_keyring_file.so.2.0 00:03:23.842 LIB libspdk_blob_bdev.a 00:03:23.842 SO libspdk_accel_ioat.so.6.0 00:03:23.842 SO libspdk_blob_bdev.so.11.0 00:03:23.842 LIB libspdk_accel_error.a 00:03:23.842 CC 
module/accel/dsa/accel_dsa.o 00:03:23.842 SYMLINK libspdk_accel_ioat.so 00:03:23.842 CC module/accel/iaa/accel_iaa.o 00:03:23.842 SYMLINK libspdk_keyring_file.so 00:03:23.842 CC module/accel/dsa/accel_dsa_rpc.o 00:03:23.842 SO libspdk_accel_error.so.2.0 00:03:23.842 SYMLINK libspdk_blob_bdev.so 00:03:23.842 CC module/accel/iaa/accel_iaa_rpc.o 00:03:23.842 SYMLINK libspdk_accel_error.so 00:03:24.101 CC module/keyring/linux/keyring.o 00:03:24.101 LIB libspdk_accel_iaa.a 00:03:24.101 SO libspdk_accel_iaa.so.3.0 00:03:24.101 CC module/bdev/delay/vbdev_delay.o 00:03:24.101 CC module/bdev/error/vbdev_error.o 00:03:24.101 CC module/bdev/gpt/gpt.o 00:03:24.101 LIB libspdk_accel_dsa.a 00:03:24.101 SYMLINK libspdk_accel_iaa.so 00:03:24.101 CC module/bdev/error/vbdev_error_rpc.o 00:03:24.101 CC module/blobfs/bdev/blobfs_bdev.o 00:03:24.101 CC module/keyring/linux/keyring_rpc.o 00:03:24.360 SO libspdk_accel_dsa.so.5.0 00:03:24.360 CC module/bdev/lvol/vbdev_lvol.o 00:03:24.360 LIB libspdk_fsdev_aio.a 00:03:24.360 SYMLINK libspdk_accel_dsa.so 00:03:24.360 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:24.360 LIB libspdk_keyring_linux.a 00:03:24.360 SO libspdk_fsdev_aio.so.1.0 00:03:24.360 LIB libspdk_sock_posix.a 00:03:24.360 SO libspdk_keyring_linux.so.1.0 00:03:24.360 SO libspdk_sock_posix.so.6.0 00:03:24.360 CC module/bdev/gpt/vbdev_gpt.o 00:03:24.360 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:24.360 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:24.360 SYMLINK libspdk_keyring_linux.so 00:03:24.360 SYMLINK libspdk_fsdev_aio.so 00:03:24.619 SYMLINK libspdk_sock_posix.so 00:03:24.619 LIB libspdk_bdev_error.a 00:03:24.619 SO libspdk_bdev_error.so.6.0 00:03:24.619 LIB libspdk_blobfs_bdev.a 00:03:24.619 SO libspdk_blobfs_bdev.so.6.0 00:03:24.619 SYMLINK libspdk_bdev_error.so 00:03:24.619 CC module/bdev/malloc/bdev_malloc.o 00:03:24.619 CC module/bdev/null/bdev_null.o 00:03:24.619 SYMLINK libspdk_blobfs_bdev.so 00:03:24.619 CC module/bdev/null/bdev_null_rpc.o 00:03:24.619 LIB 
libspdk_bdev_delay.a 00:03:24.619 CC module/bdev/nvme/bdev_nvme.o 00:03:24.619 SO libspdk_bdev_delay.so.6.0 00:03:24.877 LIB libspdk_bdev_gpt.a 00:03:24.877 SO libspdk_bdev_gpt.so.6.0 00:03:24.877 CC module/bdev/passthru/vbdev_passthru.o 00:03:24.877 SYMLINK libspdk_bdev_delay.so 00:03:24.877 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:24.877 CC module/bdev/raid/bdev_raid.o 00:03:24.877 SYMLINK libspdk_bdev_gpt.so 00:03:24.877 LIB libspdk_bdev_lvol.a 00:03:24.877 SO libspdk_bdev_lvol.so.6.0 00:03:24.877 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.140 LIB libspdk_bdev_null.a 00:03:25.140 SO libspdk_bdev_null.so.6.0 00:03:25.140 CC module/bdev/split/vbdev_split.o 00:03:25.140 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:25.140 SYMLINK libspdk_bdev_lvol.so 00:03:25.140 CC module/bdev/aio/bdev_aio.o 00:03:25.140 CC module/bdev/split/vbdev_split_rpc.o 00:03:25.140 SYMLINK libspdk_bdev_null.so 00:03:25.140 CC module/bdev/raid/bdev_raid_rpc.o 00:03:25.140 CC module/bdev/raid/bdev_raid_sb.o 00:03:25.140 LIB libspdk_bdev_passthru.a 00:03:25.140 LIB libspdk_bdev_malloc.a 00:03:25.140 SO libspdk_bdev_passthru.so.6.0 00:03:25.140 SO libspdk_bdev_malloc.so.6.0 00:03:25.140 SYMLINK libspdk_bdev_passthru.so 00:03:25.140 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.140 CC module/bdev/raid/raid0.o 00:03:25.399 SYMLINK libspdk_bdev_malloc.so 00:03:25.399 CC module/bdev/raid/raid1.o 00:03:25.399 LIB libspdk_bdev_split.a 00:03:25.399 SO libspdk_bdev_split.so.6.0 00:03:25.399 CC module/bdev/nvme/nvme_rpc.o 00:03:25.399 SYMLINK libspdk_bdev_split.so 00:03:25.399 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.399 CC module/bdev/nvme/vbdev_opal.o 00:03:25.399 CC module/bdev/aio/bdev_aio_rpc.o 00:03:25.399 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:25.658 CC module/bdev/raid/concat.o 00:03:25.658 CC module/bdev/raid/raid5f.o 00:03:25.658 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:25.658 LIB libspdk_bdev_aio.a 00:03:25.658 LIB libspdk_bdev_zone_block.a 
00:03:25.658 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:25.658 SO libspdk_bdev_aio.so.6.0 00:03:25.658 SO libspdk_bdev_zone_block.so.6.0 00:03:25.917 SYMLINK libspdk_bdev_zone_block.so 00:03:25.917 SYMLINK libspdk_bdev_aio.so 00:03:25.917 CC module/bdev/ftl/bdev_ftl.o 00:03:25.917 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:25.917 CC module/bdev/iscsi/bdev_iscsi.o 00:03:25.917 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:25.917 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:25.917 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.176 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.176 LIB libspdk_bdev_raid.a 00:03:26.176 SO libspdk_bdev_raid.so.6.0 00:03:26.435 LIB libspdk_bdev_ftl.a 00:03:26.435 SO libspdk_bdev_ftl.so.6.0 00:03:26.435 SYMLINK libspdk_bdev_raid.so 00:03:26.435 LIB libspdk_bdev_iscsi.a 00:03:26.435 SYMLINK libspdk_bdev_ftl.so 00:03:26.435 SO libspdk_bdev_iscsi.so.6.0 00:03:26.435 SYMLINK libspdk_bdev_iscsi.so 00:03:26.694 LIB libspdk_bdev_virtio.a 00:03:26.694 SO libspdk_bdev_virtio.so.6.0 00:03:26.694 SYMLINK libspdk_bdev_virtio.so 00:03:28.069 LIB libspdk_bdev_nvme.a 00:03:28.069 SO libspdk_bdev_nvme.so.7.1 00:03:28.069 SYMLINK libspdk_bdev_nvme.so 00:03:28.636 CC module/event/subsystems/vmd/vmd.o 00:03:28.636 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.636 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.636 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.636 CC module/event/subsystems/keyring/keyring.o 00:03:28.636 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.636 CC module/event/subsystems/fsdev/fsdev.o 00:03:28.636 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.636 CC module/event/subsystems/sock/sock.o 00:03:28.636 LIB libspdk_event_sock.a 00:03:28.636 LIB libspdk_event_vhost_blk.a 00:03:28.636 LIB libspdk_event_scheduler.a 00:03:28.636 LIB libspdk_event_keyring.a 00:03:28.636 SO libspdk_event_sock.so.5.0 00:03:28.636 LIB libspdk_event_vmd.a 00:03:28.636 LIB libspdk_event_fsdev.a 00:03:28.636 SO 
libspdk_event_vhost_blk.so.3.0 00:03:28.636 SO libspdk_event_scheduler.so.4.0 00:03:28.636 SO libspdk_event_keyring.so.1.0 00:03:28.636 LIB libspdk_event_iobuf.a 00:03:28.636 SO libspdk_event_fsdev.so.1.0 00:03:28.636 SO libspdk_event_vmd.so.6.0 00:03:28.895 SYMLINK libspdk_event_sock.so 00:03:28.895 SO libspdk_event_iobuf.so.3.0 00:03:28.895 SYMLINK libspdk_event_scheduler.so 00:03:28.895 SYMLINK libspdk_event_keyring.so 00:03:28.895 SYMLINK libspdk_event_vhost_blk.so 00:03:28.895 SYMLINK libspdk_event_vmd.so 00:03:28.895 SYMLINK libspdk_event_fsdev.so 00:03:28.895 SYMLINK libspdk_event_iobuf.so 00:03:29.154 CC module/event/subsystems/accel/accel.o 00:03:29.413 LIB libspdk_event_accel.a 00:03:29.413 SO libspdk_event_accel.so.6.0 00:03:29.413 SYMLINK libspdk_event_accel.so 00:03:29.672 CC module/event/subsystems/bdev/bdev.o 00:03:29.931 LIB libspdk_event_bdev.a 00:03:29.931 SO libspdk_event_bdev.so.6.0 00:03:29.931 SYMLINK libspdk_event_bdev.so 00:03:30.190 CC module/event/subsystems/nbd/nbd.o 00:03:30.190 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.190 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.190 CC module/event/subsystems/ublk/ublk.o 00:03:30.190 CC module/event/subsystems/scsi/scsi.o 00:03:30.449 LIB libspdk_event_nbd.a 00:03:30.449 LIB libspdk_event_ublk.a 00:03:30.449 SO libspdk_event_nbd.so.6.0 00:03:30.449 LIB libspdk_event_scsi.a 00:03:30.449 SO libspdk_event_ublk.so.3.0 00:03:30.449 SO libspdk_event_scsi.so.6.0 00:03:30.449 SYMLINK libspdk_event_nbd.so 00:03:30.449 SYMLINK libspdk_event_ublk.so 00:03:30.449 LIB libspdk_event_nvmf.a 00:03:30.449 SYMLINK libspdk_event_scsi.so 00:03:30.708 SO libspdk_event_nvmf.so.6.0 00:03:30.708 SYMLINK libspdk_event_nvmf.so 00:03:30.708 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.708 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.967 LIB libspdk_event_vhost_scsi.a 00:03:30.967 LIB libspdk_event_iscsi.a 00:03:30.967 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.967 SO 
libspdk_event_iscsi.so.6.0 00:03:31.226 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.226 SYMLINK libspdk_event_iscsi.so 00:03:31.226 SO libspdk.so.6.0 00:03:31.226 SYMLINK libspdk.so 00:03:31.485 CXX app/trace/trace.o 00:03:31.485 CC app/spdk_lspci/spdk_lspci.o 00:03:31.485 CC app/spdk_nvme_perf/perf.o 00:03:31.485 CC app/spdk_nvme_identify/identify.o 00:03:31.485 CC app/trace_record/trace_record.o 00:03:31.485 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.485 CC app/nvmf_tgt/nvmf_main.o 00:03:31.744 CC app/spdk_tgt/spdk_tgt.o 00:03:31.744 CC test/thread/poller_perf/poller_perf.o 00:03:31.744 CC examples/util/zipf/zipf.o 00:03:31.744 LINK spdk_lspci 00:03:31.744 LINK poller_perf 00:03:31.744 LINK zipf 00:03:31.744 LINK nvmf_tgt 00:03:32.003 LINK spdk_trace_record 00:03:32.003 LINK spdk_tgt 00:03:32.003 LINK iscsi_tgt 00:03:32.003 LINK spdk_trace 00:03:32.003 CC examples/ioat/perf/perf.o 00:03:32.003 CC examples/ioat/verify/verify.o 00:03:32.003 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.262 CC app/spdk_top/spdk_top.o 00:03:32.262 CC test/dma/test_dma/test_dma.o 00:03:32.262 CC app/spdk_dd/spdk_dd.o 00:03:32.262 LINK ioat_perf 00:03:32.262 LINK spdk_nvme_discover 00:03:32.262 LINK verify 00:03:32.521 CC app/fio/nvme/fio_plugin.o 00:03:32.521 CC test/app/bdev_svc/bdev_svc.o 00:03:32.521 LINK bdev_svc 00:03:32.521 CC test/app/histogram_perf/histogram_perf.o 00:03:32.521 LINK spdk_nvme_identify 00:03:32.780 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.780 LINK spdk_dd 00:03:32.780 LINK spdk_nvme_perf 00:03:32.780 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.780 LINK histogram_perf 00:03:32.780 LINK test_dma 00:03:32.780 LINK interrupt_tgt 00:03:33.039 CC test/app/jsoncat/jsoncat.o 00:03:33.039 TEST_HEADER include/spdk/accel.h 00:03:33.039 TEST_HEADER include/spdk/accel_module.h 00:03:33.039 TEST_HEADER include/spdk/assert.h 00:03:33.039 TEST_HEADER include/spdk/barrier.h 00:03:33.040 TEST_HEADER include/spdk/base64.h 00:03:33.040 TEST_HEADER 
include/spdk/bdev.h 00:03:33.040 TEST_HEADER include/spdk/bdev_module.h 00:03:33.040 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.040 TEST_HEADER include/spdk/bit_array.h 00:03:33.040 TEST_HEADER include/spdk/bit_pool.h 00:03:33.040 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.040 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.040 TEST_HEADER include/spdk/blobfs.h 00:03:33.040 TEST_HEADER include/spdk/blob.h 00:03:33.040 TEST_HEADER include/spdk/conf.h 00:03:33.040 TEST_HEADER include/spdk/config.h 00:03:33.040 TEST_HEADER include/spdk/cpuset.h 00:03:33.040 TEST_HEADER include/spdk/crc16.h 00:03:33.040 TEST_HEADER include/spdk/crc32.h 00:03:33.040 TEST_HEADER include/spdk/crc64.h 00:03:33.040 TEST_HEADER include/spdk/dif.h 00:03:33.040 TEST_HEADER include/spdk/dma.h 00:03:33.040 CC test/app/stub/stub.o 00:03:33.040 TEST_HEADER include/spdk/endian.h 00:03:33.040 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.040 TEST_HEADER include/spdk/env.h 00:03:33.040 TEST_HEADER include/spdk/event.h 00:03:33.040 TEST_HEADER include/spdk/fd_group.h 00:03:33.040 TEST_HEADER include/spdk/fd.h 00:03:33.040 TEST_HEADER include/spdk/file.h 00:03:33.040 TEST_HEADER include/spdk/fsdev.h 00:03:33.040 TEST_HEADER include/spdk/fsdev_module.h 00:03:33.040 TEST_HEADER include/spdk/ftl.h 00:03:33.040 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:33.040 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.040 TEST_HEADER include/spdk/hexlify.h 00:03:33.040 TEST_HEADER include/spdk/histogram_data.h 00:03:33.040 TEST_HEADER include/spdk/idxd.h 00:03:33.040 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.040 TEST_HEADER include/spdk/init.h 00:03:33.040 TEST_HEADER include/spdk/ioat.h 00:03:33.040 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.040 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.040 TEST_HEADER include/spdk/json.h 00:03:33.040 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.040 TEST_HEADER include/spdk/keyring.h 00:03:33.040 TEST_HEADER include/spdk/keyring_module.h 00:03:33.040 TEST_HEADER 
include/spdk/likely.h 00:03:33.040 TEST_HEADER include/spdk/log.h 00:03:33.040 TEST_HEADER include/spdk/lvol.h 00:03:33.040 TEST_HEADER include/spdk/md5.h 00:03:33.040 TEST_HEADER include/spdk/memory.h 00:03:33.040 TEST_HEADER include/spdk/mmio.h 00:03:33.040 TEST_HEADER include/spdk/nbd.h 00:03:33.040 CC app/vhost/vhost.o 00:03:33.040 TEST_HEADER include/spdk/net.h 00:03:33.040 TEST_HEADER include/spdk/notify.h 00:03:33.040 TEST_HEADER include/spdk/nvme.h 00:03:33.040 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.040 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.040 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.040 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.040 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.040 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.040 LINK jsoncat 00:03:33.040 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.040 LINK spdk_nvme 00:03:33.040 TEST_HEADER include/spdk/nvmf.h 00:03:33.040 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.040 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.040 TEST_HEADER include/spdk/opal.h 00:03:33.040 TEST_HEADER include/spdk/opal_spec.h 00:03:33.040 TEST_HEADER include/spdk/pci_ids.h 00:03:33.040 TEST_HEADER include/spdk/pipe.h 00:03:33.040 TEST_HEADER include/spdk/queue.h 00:03:33.040 TEST_HEADER include/spdk/reduce.h 00:03:33.040 TEST_HEADER include/spdk/rpc.h 00:03:33.040 TEST_HEADER include/spdk/scheduler.h 00:03:33.040 TEST_HEADER include/spdk/scsi.h 00:03:33.040 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.040 TEST_HEADER include/spdk/sock.h 00:03:33.040 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.040 TEST_HEADER include/spdk/stdinc.h 00:03:33.040 TEST_HEADER include/spdk/string.h 00:03:33.040 TEST_HEADER include/spdk/thread.h 00:03:33.040 TEST_HEADER include/spdk/trace.h 00:03:33.040 TEST_HEADER include/spdk/trace_parser.h 00:03:33.040 TEST_HEADER include/spdk/tree.h 00:03:33.040 TEST_HEADER include/spdk/ublk.h 00:03:33.040 TEST_HEADER include/spdk/util.h 00:03:33.040 TEST_HEADER 
include/spdk/uuid.h 00:03:33.040 TEST_HEADER include/spdk/version.h 00:03:33.040 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.040 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.040 TEST_HEADER include/spdk/vhost.h 00:03:33.040 TEST_HEADER include/spdk/vmd.h 00:03:33.040 TEST_HEADER include/spdk/xor.h 00:03:33.040 TEST_HEADER include/spdk/zipf.h 00:03:33.299 CXX test/cpp_headers/accel.o 00:03:33.299 LINK stub 00:03:33.299 CXX test/cpp_headers/accel_module.o 00:03:33.299 CC examples/sock/hello_world/hello_sock.o 00:03:33.299 LINK nvme_fuzz 00:03:33.299 LINK vhost 00:03:33.299 CC examples/thread/thread/thread_ex.o 00:03:33.299 LINK spdk_top 00:03:33.299 CC app/fio/bdev/fio_plugin.o 00:03:33.558 CC test/env/vtophys/vtophys.o 00:03:33.558 CXX test/cpp_headers/assert.o 00:03:33.558 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.558 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.558 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.558 LINK thread 00:03:33.558 LINK hello_sock 00:03:33.558 LINK vtophys 00:03:33.558 CXX test/cpp_headers/barrier.o 00:03:33.558 LINK env_dpdk_post_init 00:03:33.558 CC test/event/event_perf/event_perf.o 00:03:33.817 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:33.817 CXX test/cpp_headers/base64.o 00:03:33.817 CXX test/cpp_headers/bdev.o 00:03:33.817 LINK mem_callbacks 00:03:33.817 CXX test/cpp_headers/bdev_module.o 00:03:33.817 LINK event_perf 00:03:33.817 LINK spdk_bdev 00:03:34.076 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.076 CC examples/idxd/perf/perf.o 00:03:34.076 CC test/env/memory/memory_ut.o 00:03:34.076 CXX test/cpp_headers/bdev_zone.o 00:03:34.076 CC test/env/pci/pci_ut.o 00:03:34.076 CC test/event/reactor/reactor.o 00:03:34.076 CC test/event/reactor_perf/reactor_perf.o 00:03:34.076 LINK lsvmd 00:03:34.347 LINK reactor 00:03:34.347 LINK reactor_perf 00:03:34.347 CXX test/cpp_headers/bit_array.o 00:03:34.347 LINK vhost_fuzz 00:03:34.347 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:34.347 CXX 
test/cpp_headers/bit_pool.o 00:03:34.347 LINK idxd_perf 00:03:34.619 CC examples/vmd/led/led.o 00:03:34.619 CC test/event/app_repeat/app_repeat.o 00:03:34.619 LINK pci_ut 00:03:34.619 CC test/event/scheduler/scheduler.o 00:03:34.619 CXX test/cpp_headers/blob_bdev.o 00:03:34.619 LINK hello_fsdev 00:03:34.619 CC test/nvme/aer/aer.o 00:03:34.619 LINK led 00:03:34.619 LINK app_repeat 00:03:34.879 CC examples/accel/perf/accel_perf.o 00:03:34.879 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.879 LINK scheduler 00:03:34.879 CXX test/cpp_headers/blobfs.o 00:03:34.879 CC test/nvme/reset/reset.o 00:03:34.879 CC test/nvme/sgl/sgl.o 00:03:34.879 CC test/rpc_client/rpc_client_test.o 00:03:35.137 LINK aer 00:03:35.137 CXX test/cpp_headers/blob.o 00:03:35.137 CC test/nvme/e2edp/nvme_dp.o 00:03:35.137 LINK rpc_client_test 00:03:35.137 LINK reset 00:03:35.396 CC test/nvme/overhead/overhead.o 00:03:35.396 CC test/accel/dif/dif.o 00:03:35.396 CXX test/cpp_headers/conf.o 00:03:35.396 LINK sgl 00:03:35.396 CXX test/cpp_headers/config.o 00:03:35.396 LINK memory_ut 00:03:35.396 LINK nvme_dp 00:03:35.397 CXX test/cpp_headers/cpuset.o 00:03:35.397 CC test/nvme/err_injection/err_injection.o 00:03:35.397 LINK accel_perf 00:03:35.655 CC test/nvme/startup/startup.o 00:03:35.655 CC test/nvme/reserve/reserve.o 00:03:35.655 CXX test/cpp_headers/crc16.o 00:03:35.655 LINK overhead 00:03:35.655 LINK err_injection 00:03:35.655 LINK startup 00:03:35.913 LINK iscsi_fuzz 00:03:35.913 CXX test/cpp_headers/crc32.o 00:03:35.913 LINK reserve 00:03:35.913 CC test/blobfs/mkfs/mkfs.o 00:03:35.913 CC examples/blob/hello_world/hello_blob.o 00:03:35.913 CC test/lvol/esnap/esnap.o 00:03:35.913 CC examples/nvme/hello_world/hello_world.o 00:03:35.913 CC test/nvme/simple_copy/simple_copy.o 00:03:35.913 CXX test/cpp_headers/crc64.o 00:03:36.172 CC examples/nvme/reconnect/reconnect.o 00:03:36.173 LINK mkfs 00:03:36.173 CC examples/blob/cli/blobcli.o 00:03:36.173 LINK dif 00:03:36.173 CXX test/cpp_headers/dif.o 
00:03:36.173 LINK hello_blob 00:03:36.173 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.173 LINK hello_world 00:03:36.432 LINK simple_copy 00:03:36.433 CXX test/cpp_headers/dma.o 00:03:36.433 CXX test/cpp_headers/endian.o 00:03:36.433 CXX test/cpp_headers/env_dpdk.o 00:03:36.433 CXX test/cpp_headers/env.o 00:03:36.433 LINK reconnect 00:03:36.433 LINK hello_bdev 00:03:36.692 CC test/nvme/connect_stress/connect_stress.o 00:03:36.692 CC test/bdev/bdevio/bdevio.o 00:03:36.692 CC test/nvme/boot_partition/boot_partition.o 00:03:36.692 CXX test/cpp_headers/event.o 00:03:36.692 CC test/nvme/compliance/nvme_compliance.o 00:03:36.692 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.692 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.692 LINK blobcli 00:03:36.692 LINK connect_stress 00:03:36.951 LINK boot_partition 00:03:36.951 CXX test/cpp_headers/fd_group.o 00:03:36.951 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.951 LINK fused_ordering 00:03:36.951 CXX test/cpp_headers/fd.o 00:03:36.951 CXX test/cpp_headers/file.o 00:03:36.951 CXX test/cpp_headers/fsdev.o 00:03:37.210 LINK nvme_compliance 00:03:37.210 CC examples/nvme/arbitration/arbitration.o 00:03:37.211 LINK bdevio 00:03:37.211 CXX test/cpp_headers/fsdev_module.o 00:03:37.211 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.211 CC test/nvme/fdp/fdp.o 00:03:37.211 CXX test/cpp_headers/ftl.o 00:03:37.211 CC test/nvme/cuse/cuse.o 00:03:37.470 CXX test/cpp_headers/fuse_dispatcher.o 00:03:37.470 LINK nvme_manage 00:03:37.470 LINK doorbell_aers 00:03:37.470 CC examples/nvme/hotplug/hotplug.o 00:03:37.470 CXX test/cpp_headers/gpt_spec.o 00:03:37.470 LINK arbitration 00:03:37.470 CXX test/cpp_headers/hexlify.o 00:03:37.729 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.729 CXX test/cpp_headers/histogram_data.o 00:03:37.729 LINK fdp 00:03:37.729 CC examples/nvme/abort/abort.o 00:03:37.729 CXX test/cpp_headers/idxd.o 00:03:37.729 LINK hotplug 00:03:37.729 CC examples/nvme/pmr_persistence/pmr_persistence.o 
00:03:37.729 LINK cmb_copy 00:03:37.729 CXX test/cpp_headers/idxd_spec.o 00:03:37.729 CXX test/cpp_headers/init.o 00:03:37.988 CXX test/cpp_headers/ioat.o 00:03:37.988 CXX test/cpp_headers/ioat_spec.o 00:03:37.988 LINK pmr_persistence 00:03:37.988 LINK bdevperf 00:03:37.988 CXX test/cpp_headers/iscsi_spec.o 00:03:37.988 CXX test/cpp_headers/json.o 00:03:37.988 CXX test/cpp_headers/jsonrpc.o 00:03:37.988 CXX test/cpp_headers/keyring.o 00:03:37.988 CXX test/cpp_headers/keyring_module.o 00:03:37.988 CXX test/cpp_headers/likely.o 00:03:38.248 LINK abort 00:03:38.248 CXX test/cpp_headers/log.o 00:03:38.248 CXX test/cpp_headers/lvol.o 00:03:38.248 CXX test/cpp_headers/md5.o 00:03:38.248 CXX test/cpp_headers/memory.o 00:03:38.248 CXX test/cpp_headers/mmio.o 00:03:38.248 CXX test/cpp_headers/nbd.o 00:03:38.248 CXX test/cpp_headers/net.o 00:03:38.248 CXX test/cpp_headers/notify.o 00:03:38.248 CXX test/cpp_headers/nvme.o 00:03:38.248 CXX test/cpp_headers/nvme_intel.o 00:03:38.507 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.507 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.507 CXX test/cpp_headers/nvme_spec.o 00:03:38.507 CXX test/cpp_headers/nvme_zns.o 00:03:38.507 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.507 CC examples/nvmf/nvmf/nvmf.o 00:03:38.507 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.507 CXX test/cpp_headers/nvmf.o 00:03:38.507 CXX test/cpp_headers/nvmf_spec.o 00:03:38.766 CXX test/cpp_headers/nvmf_transport.o 00:03:38.766 CXX test/cpp_headers/opal.o 00:03:38.766 CXX test/cpp_headers/opal_spec.o 00:03:38.766 CXX test/cpp_headers/pci_ids.o 00:03:38.766 CXX test/cpp_headers/pipe.o 00:03:38.766 CXX test/cpp_headers/queue.o 00:03:38.766 CXX test/cpp_headers/reduce.o 00:03:38.766 CXX test/cpp_headers/rpc.o 00:03:38.766 LINK cuse 00:03:38.766 CXX test/cpp_headers/scheduler.o 00:03:38.766 LINK nvmf 00:03:38.766 CXX test/cpp_headers/scsi.o 00:03:38.766 CXX test/cpp_headers/scsi_spec.o 00:03:39.025 CXX test/cpp_headers/sock.o 00:03:39.025 CXX test/cpp_headers/stdinc.o 
00:03:39.025 CXX test/cpp_headers/string.o 00:03:39.025 CXX test/cpp_headers/thread.o 00:03:39.025 CXX test/cpp_headers/trace.o 00:03:39.025 CXX test/cpp_headers/trace_parser.o 00:03:39.025 CXX test/cpp_headers/tree.o 00:03:39.025 CXX test/cpp_headers/ublk.o 00:03:39.025 CXX test/cpp_headers/util.o 00:03:39.025 CXX test/cpp_headers/uuid.o 00:03:39.025 CXX test/cpp_headers/version.o 00:03:39.025 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.025 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.284 CXX test/cpp_headers/vhost.o 00:03:39.284 CXX test/cpp_headers/vmd.o 00:03:39.285 CXX test/cpp_headers/xor.o 00:03:39.285 CXX test/cpp_headers/zipf.o 00:03:42.573 LINK esnap 00:03:43.141 00:03:43.141 real 1m33.171s 00:03:43.141 user 8m24.095s 00:03:43.141 sys 1m47.234s 00:03:43.141 09:57:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.141 ************************************ 00:03:43.141 END TEST make 00:03:43.141 ************************************ 00:03:43.141 09:57:57 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.141 09:57:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.141 09:57:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.141 09:57:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.141 09:57:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.141 09:57:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.141 09:57:57 -- pm/common@44 -- $ pid=5246 00:03:43.142 09:57:57 -- pm/common@50 -- $ kill -TERM 5246 00:03:43.142 09:57:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.142 09:57:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.142 09:57:57 -- pm/common@44 -- $ pid=5248 00:03:43.142 09:57:57 -- pm/common@50 -- $ kill -TERM 5248 00:03:43.142 09:57:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST 
== 1 )) 00:03:43.142 09:57:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:43.142 09:57:57 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.142 09:57:57 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.142 09:57:57 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.142 09:57:57 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.142 09:57:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.142 09:57:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.142 09:57:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.142 09:57:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.142 09:57:57 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.142 09:57:57 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.142 09:57:57 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.142 09:57:57 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.142 09:57:57 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.142 09:57:57 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.142 09:57:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.142 09:57:57 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.142 09:57:57 -- scripts/common.sh@345 -- # : 1 00:03:43.142 09:57:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.142 09:57:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.142 09:57:57 -- scripts/common.sh@365 -- # decimal 1 00:03:43.142 09:57:57 -- scripts/common.sh@353 -- # local d=1 00:03:43.142 09:57:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.142 09:57:57 -- scripts/common.sh@355 -- # echo 1 00:03:43.142 09:57:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.142 09:57:57 -- scripts/common.sh@366 -- # decimal 2 00:03:43.142 09:57:57 -- scripts/common.sh@353 -- # local d=2 00:03:43.142 09:57:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.142 09:57:57 -- scripts/common.sh@355 -- # echo 2 00:03:43.142 09:57:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.142 09:57:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.142 09:57:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.142 09:57:57 -- scripts/common.sh@368 -- # return 0 00:03:43.142 09:57:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.142 09:57:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.142 --rc genhtml_branch_coverage=1 00:03:43.142 --rc genhtml_function_coverage=1 00:03:43.142 --rc genhtml_legend=1 00:03:43.142 --rc geninfo_all_blocks=1 00:03:43.142 --rc geninfo_unexecuted_blocks=1 00:03:43.142 00:03:43.142 ' 00:03:43.142 09:57:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.142 --rc genhtml_branch_coverage=1 00:03:43.142 --rc genhtml_function_coverage=1 00:03:43.142 --rc genhtml_legend=1 00:03:43.142 --rc geninfo_all_blocks=1 00:03:43.142 --rc geninfo_unexecuted_blocks=1 00:03:43.142 00:03:43.142 ' 00:03:43.142 09:57:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.142 --rc genhtml_branch_coverage=1 00:03:43.142 --rc 
genhtml_function_coverage=1 00:03:43.142 --rc genhtml_legend=1 00:03:43.142 --rc geninfo_all_blocks=1 00:03:43.142 --rc geninfo_unexecuted_blocks=1 00:03:43.142 00:03:43.142 ' 00:03:43.142 09:57:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.142 --rc genhtml_branch_coverage=1 00:03:43.142 --rc genhtml_function_coverage=1 00:03:43.142 --rc genhtml_legend=1 00:03:43.142 --rc geninfo_all_blocks=1 00:03:43.142 --rc geninfo_unexecuted_blocks=1 00:03:43.142 00:03:43.142 ' 00:03:43.142 09:57:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.142 09:57:57 -- nvmf/common.sh@7 -- # uname -s 00:03:43.142 09:57:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.142 09:57:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.142 09:57:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.142 09:57:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.142 09:57:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.142 09:57:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.142 09:57:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.142 09:57:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.142 09:57:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.142 09:57:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.401 09:57:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c78e9af8-b39e-4b71-8f40-2b37c338158f 00:03:43.401 09:57:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=c78e9af8-b39e-4b71-8f40-2b37c338158f 00:03:43.401 09:57:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.401 09:57:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.401 09:57:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.401 09:57:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:43.401 09:57:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.401 09:57:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.401 09:57:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.401 09:57:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.401 09:57:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.401 09:57:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.401 09:57:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.401 09:57:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.401 09:57:57 -- paths/export.sh@5 -- # export PATH 00:03:43.401 09:57:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.401 09:57:57 -- nvmf/common.sh@51 -- # : 0 00:03:43.401 09:57:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.401 09:57:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.401 09:57:57 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:43.401 09:57:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.401 09:57:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.401 09:57:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.401 09:57:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.401 09:57:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.401 09:57:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.401 09:57:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.401 09:57:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.401 09:57:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.401 09:57:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.401 09:57:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.401 09:57:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.401 09:57:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.401 09:57:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.401 09:57:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.401 09:57:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.401 09:57:57 -- spdk/autotest.sh@48 -- # udevadm_pid=54288 00:03:43.401 09:57:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.401 09:57:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.401 09:57:57 -- pm/common@17 -- # local monitor 00:03:43.401 09:57:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.402 09:57:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.402 09:57:57 -- pm/common@25 -- # sleep 1 00:03:43.402 09:57:57 -- pm/common@21 -- # date +%s 00:03:43.402 09:57:57 -- 
pm/common@21 -- # date +%s 00:03:43.402 09:57:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732010277 00:03:43.402 09:57:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732010277 00:03:43.402 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732010277_collect-cpu-load.pm.log 00:03:43.402 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732010277_collect-vmstat.pm.log 00:03:44.340 09:57:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.340 09:57:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.340 09:57:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.340 09:57:58 -- common/autotest_common.sh@10 -- # set +x 00:03:44.340 09:57:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.340 09:57:58 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.340 09:57:58 -- common/autotest_common.sh@10 -- # set +x 00:03:44.340 09:57:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.340 09:57:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.340 09:57:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.340 09:57:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.340 09:57:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.340 09:57:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.340 09:57:58 -- common/autotest_common.sh@1457 -- # uname 00:03:44.340 09:57:58 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.340 09:57:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.340 09:57:58 -- common/autotest_common.sh@1477 -- 
# uname
00:03:44.340 09:57:58 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:44.340 09:57:58 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:44.340 09:57:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:44.599 lcov: LCOV version 1.15
00:03:44.599 09:57:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:59.518 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:59.518 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:04:17.609 09:58:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:04:17.609 09:58:29 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:17.609 09:58:29 -- common/autotest_common.sh@10 -- # set +x
00:04:17.609 09:58:29 -- spdk/autotest.sh@78 -- # rm -f
00:04:17.609 09:58:29 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:17.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:17.609 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:04:17.609 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:04:17.609 09:58:29 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:04:17.609 09:58:29 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:04:17.609 09:58:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:04:17.609 09:58:29 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:04:17.609 09:58:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:17.609 09:58:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:04:17.609 09:58:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:04:17.609 09:58:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:17.609 09:58:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:04:17.609 09:58:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:04:17.609 09:58:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:17.609 09:58:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2
00:04:17.609 09:58:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2
00:04:17.609 09:58:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:17.609 09:58:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3
00:04:17.609 09:58:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n3
00:04:17.609 09:58:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:04:17.609 09:58:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:17.609 09:58:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:17.609 09:58:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:17.609 09:58:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:17.609 09:58:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:17.609 09:58:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:17.609 09:58:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:17.609 No valid GPT data, bailing
00:04:17.609 09:58:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:17.609 09:58:29 -- scripts/common.sh@394 -- # pt=
00:04:17.609 09:58:29 -- scripts/common.sh@395 -- # return 1
00:04:17.609 09:58:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:17.609 1+0 records in
00:04:17.609 1+0 records out
00:04:17.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475085 s, 221 MB/s
00:04:17.609 09:58:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:17.609 09:58:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:17.609 09:58:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:04:17.609 09:58:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:04:17.609 09:58:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:17.609 No valid GPT data, bailing
00:04:17.610 09:58:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:17.610 09:58:30 -- scripts/common.sh@394 -- # pt=
00:04:17.610 09:58:30 -- scripts/common.sh@395 -- # return 1
00:04:17.610 09:58:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:17.610 1+0 records in
00:04:17.610 1+0 records out
00:04:17.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652417 s, 161 MB/s
00:04:17.610 09:58:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:17.610 09:58:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:17.610 09:58:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:04:17.610 09:58:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:04:17.610 09:58:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:04:17.610 No valid GPT data, bailing
00:04:17.610 09:58:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:04:17.610 09:58:30 -- scripts/common.sh@394 -- # pt=
00:04:17.610 09:58:30 -- scripts/common.sh@395 -- # return 1
00:04:17.610 09:58:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:04:17.610 1+0 records in
00:04:17.610 1+0 records out
00:04:17.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005295 s, 198 MB/s
00:04:17.610 09:58:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:17.610 09:58:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:17.610 09:58:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:04:17.610 09:58:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:04:17.610 09:58:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:04:17.610 No valid GPT data, bailing
00:04:17.610 09:58:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:04:17.610 09:58:30 -- scripts/common.sh@394 -- # pt=
00:04:17.610 09:58:30 -- scripts/common.sh@395 -- # return 1
00:04:17.610 09:58:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:04:17.610 1+0 records in
00:04:17.610 1+0 records out
00:04:17.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00350089 s, 300 MB/s
00:04:17.610 09:58:30 -- spdk/autotest.sh@105 -- # sync
00:04:17.610 09:58:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:17.610 09:58:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:17.610 09:58:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:18.178 09:58:32 -- spdk/autotest.sh@111 -- # uname -s
00:04:18.178 09:58:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:18.178 09:58:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:18.178 09:58:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:18.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.747 Hugepages
00:04:18.747 node hugesize free / total
00:04:18.747 node0 1048576kB 0 / 0
00:04:18.747 node0 2048kB 0 / 0
00:04:18.747
00:04:18.747 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:18.747 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:19.006 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:19.006 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:19.006 09:58:33 -- spdk/autotest.sh@117 -- # uname -s
00:04:19.006 09:58:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:19.006 09:58:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:19.006 09:58:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:19.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:19.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:19.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:19.833 09:58:33 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:20.771 09:58:34 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:20.771 09:58:34 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:20.771 09:58:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:20.771 09:58:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:20.771 09:58:34 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:20.771 09:58:34 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:20.771 09:58:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:20.771 09:58:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:20.771 09:58:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:21.030 09:58:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:21.030 09:58:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:21.030 09:58:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:21.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:21.304 Waiting for block devices as requested
00:04:21.304 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:21.592 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:21.592 09:58:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:21.592 09:58:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:21.592 09:58:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:21.592 09:58:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:21.592 09:58:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1543 -- # continue
00:04:21.592 09:58:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:21.592 09:58:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:21.592 09:58:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:21.592 09:58:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:21.592 09:58:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:21.592 09:58:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:21.592 09:58:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:21.592 09:58:35 -- common/autotest_common.sh@1543 -- # continue
00:04:21.592 09:58:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:21.592 09:58:35 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:21.592 09:58:35 -- common/autotest_common.sh@10 -- # set +x
00:04:21.592 09:58:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:21.592 09:58:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:21.592 09:58:35 -- common/autotest_common.sh@10 -- # set +x
00:04:21.592 09:58:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:22.529 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.529 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.529 09:58:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:22.529 09:58:36 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:22.529 09:58:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.529 09:58:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:22.529 09:58:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:22.529 09:58:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:22.529 09:58:36 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:22.529 09:58:36 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:22.529 09:58:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:22.529 09:58:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:22.529 09:58:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:22.529 09:58:36 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:22.529 09:58:36 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:22.529 09:58:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:22.529 09:58:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:22.529 09:58:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:22.788 09:58:36 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:22.788 09:58:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:22.788 09:58:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:22.788 09:58:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:04:22.788 09:58:36 -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:22.788 09:58:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:22.788 09:58:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:22.788 09:58:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:04:22.788 09:58:36 -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:22.788 09:58:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:22.788 09:58:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:04:22.788 09:58:36 -- common/autotest_common.sh@1572 -- # return 0
00:04:22.788 09:58:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:04:22.788 09:58:36 -- common/autotest_common.sh@1580 -- # return 0
00:04:22.788 09:58:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:22.788 09:58:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:22.788 09:58:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:22.788 09:58:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:22.788 09:58:36 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:22.788 09:58:36 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:22.788 09:58:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.788 09:58:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:22.788 09:58:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:22.788 09:58:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:22.788 09:58:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.788 09:58:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.788 ************************************
00:04:22.788 START TEST env
00:04:22.788 ************************************
00:04:22.788 09:58:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:22.788 * Looking for test storage...
00:04:22.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:22.788 09:58:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:22.788 09:58:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:22.788 09:58:36 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:22.788 09:58:36 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:22.788 09:58:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:22.788 09:58:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:22.788 09:58:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:22.788 09:58:36 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:22.788 09:58:36 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:22.788 09:58:36 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:22.788 09:58:36 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:22.788 09:58:36 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:22.788 09:58:36 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:22.788 09:58:36 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:22.788 09:58:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:22.788 09:58:36 env -- scripts/common.sh@344 -- # case "$op" in
00:04:22.788 09:58:36 env -- scripts/common.sh@345 -- # : 1
00:04:22.788 09:58:36 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:22.788 09:58:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:22.788 09:58:36 env -- scripts/common.sh@365 -- # decimal 1
00:04:22.788 09:58:36 env -- scripts/common.sh@353 -- # local d=1
00:04:22.788 09:58:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:22.788 09:58:36 env -- scripts/common.sh@355 -- # echo 1
00:04:22.788 09:58:36 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:22.788 09:58:36 env -- scripts/common.sh@366 -- # decimal 2
00:04:22.788 09:58:36 env -- scripts/common.sh@353 -- # local d=2
00:04:22.788 09:58:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:22.788 09:58:36 env -- scripts/common.sh@355 -- # echo 2
00:04:22.788 09:58:36 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:23.789 09:58:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:22.789 09:58:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:22.789 09:58:37 env -- scripts/common.sh@368 -- # return 0
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.789 --rc genhtml_branch_coverage=1
00:04:22.789 --rc genhtml_function_coverage=1
00:04:22.789 --rc genhtml_legend=1
00:04:22.789 --rc geninfo_all_blocks=1
00:04:22.789 --rc geninfo_unexecuted_blocks=1
00:04:22.789
00:04:22.789 '
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.789 --rc genhtml_branch_coverage=1
00:04:22.789 --rc genhtml_function_coverage=1
00:04:22.789 --rc genhtml_legend=1
00:04:22.789 --rc geninfo_all_blocks=1
00:04:22.789 --rc geninfo_unexecuted_blocks=1
00:04:22.789
00:04:22.789 '
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.789 --rc genhtml_branch_coverage=1
00:04:22.789 --rc genhtml_function_coverage=1
00:04:22.789 --rc genhtml_legend=1
00:04:22.789 --rc geninfo_all_blocks=1
00:04:22.789 --rc geninfo_unexecuted_blocks=1
00:04:22.789
00:04:22.789 '
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.789 --rc genhtml_branch_coverage=1
00:04:22.789 --rc genhtml_function_coverage=1
00:04:22.789 --rc genhtml_legend=1
00:04:22.789 --rc geninfo_all_blocks=1
00:04:22.789 --rc geninfo_unexecuted_blocks=1
00:04:22.789
00:04:22.789 '
00:04:22.789 09:58:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:22.789 09:58:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.789 09:58:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.789 ************************************
00:04:22.789 START TEST env_memory
00:04:22.789 ************************************
00:04:23.048 09:58:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:23.048
00:04:23.048
00:04:23.048 CUnit - A unit testing framework for C - Version 2.1-3
00:04:23.048 http://cunit.sourceforge.net/
00:04:23.048
00:04:23.048
00:04:23.048 Suite: memory
00:04:23.048 Test: alloc and free memory map ...[2024-11-19 09:58:37.090775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:23.048 passed
00:04:23.048 Test: mem map translation ...[2024-11-19 09:58:37.151971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:23.048 [2024-11-19 09:58:37.152078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:23.048 [2024-11-19 09:58:37.152177] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:23.048 [2024-11-19 09:58:37.152212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:23.048 passed
00:04:23.048 Test: mem map registration ...[2024-11-19 09:58:37.250990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:23.307 [2024-11-19 09:58:37.251102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:23.307 passed
00:04:23.307 Test: mem map adjacent registrations ...passed
00:04:23.307
00:04:23.307 Run Summary: Type Total Ran Passed Failed Inactive
00:04:23.307 suites 1 1 n/a 0 0
00:04:23.307 tests 4 4 4 0 0
00:04:23.307 asserts 152 152 152 0 n/a
00:04:23.307
00:04:23.307 Elapsed time = 0.327 seconds
00:04:23.307 ************************************
00:04:23.307 END TEST env_memory
00:04:23.307 ************************************
00:04:23.307
00:04:23.307 real 0m0.371s
00:04:23.307 user 0m0.332s
00:04:23.307 sys 0m0.031s
00:04:23.307 09:58:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.307 09:58:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:23.307 09:58:37 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:23.307 09:58:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.307 09:58:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.307 09:58:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:23.307 ************************************
00:04:23.307 START TEST env_vtophys
00:04:23.307 ************************************
00:04:23.307 09:58:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:23.307 EAL: lib.eal log level changed from notice to debug
00:04:23.307 EAL: Detected lcore 0 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 1 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 2 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 3 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 4 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 5 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 6 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 7 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 8 as core 0 on socket 0
00:04:23.307 EAL: Detected lcore 9 as core 0 on socket 0
00:04:23.307 EAL: Maximum logical cores by configuration: 128
00:04:23.307 EAL: Detected CPU lcores: 10
00:04:23.307 EAL: Detected NUMA nodes: 1
00:04:23.307 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:23.307 EAL: Detected shared linkage of DPDK
00:04:23.567 EAL: No shared files mode enabled, IPC will be disabled
00:04:23.567 EAL: Selected IOVA mode 'PA'
00:04:23.567 EAL: Probing VFIO support...
00:04:23.567 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:23.567 EAL: VFIO modules not loaded, skipping VFIO support...
00:04:23.567 EAL: Ask a virtual area of 0x2e000 bytes
00:04:23.567 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:23.567 EAL: Setting up physically contiguous memory...
00:04:23.567 EAL: Setting maximum number of open files to 524288
00:04:23.567 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:23.567 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:23.567 EAL: Ask a virtual area of 0x61000 bytes
00:04:23.567 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:23.567 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:23.567 EAL: Ask a virtual area of 0x400000000 bytes
00:04:23.567 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:23.567 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:23.567 EAL: Ask a virtual area of 0x61000 bytes
00:04:23.567 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:23.567 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:23.567 EAL: Ask a virtual area of 0x400000000 bytes
00:04:23.567 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:23.567 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:23.567 EAL: Ask a virtual area of 0x61000 bytes
00:04:23.567 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:23.567 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:23.567 EAL: Ask a virtual area of 0x400000000 bytes
00:04:23.567 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:23.567 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:23.567 EAL: Ask a virtual area of 0x61000 bytes
00:04:23.567 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:23.567 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:23.567 EAL: Ask a virtual area of 0x400000000 bytes
00:04:23.567 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:23.567 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:23.567 EAL: Hugepages will be freed exactly as allocated.
00:04:23.567 EAL: No shared files mode enabled, IPC is disabled
00:04:23.567 EAL: No shared files mode enabled, IPC is disabled
00:04:23.567 EAL: TSC frequency is ~2200000 KHz
00:04:23.567 EAL: Main lcore 0 is ready (tid=7f2e2f702a40;cpuset=[0])
00:04:23.567 EAL: Trying to obtain current memory policy.
00:04:23.567 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:23.567 EAL: Restoring previous memory policy: 0
00:04:23.567 EAL: request: mp_malloc_sync
00:04:23.567 EAL: No shared files mode enabled, IPC is disabled
00:04:23.567 EAL: Heap on socket 0 was expanded by 2MB
00:04:23.567 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:23.567 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:23.567 EAL: Mem event callback 'spdk:(nil)' registered
00:04:23.567 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:04:23.567
00:04:23.567
00:04:23.567 CUnit - A unit testing framework for C - Version 2.1-3
00:04:23.567 http://cunit.sourceforge.net/
00:04:23.567
00:04:23.567
00:04:23.567 Suite: components_suite
00:04:24.136 Test: vtophys_malloc_test ...passed
00:04:24.136 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:24.136 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.136 EAL: Restoring previous memory policy: 4
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was expanded by 4MB
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was shrunk by 4MB
00:04:24.136 EAL: Trying to obtain current memory policy.
00:04:24.136 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.136 EAL: Restoring previous memory policy: 4
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was expanded by 6MB
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was shrunk by 6MB
00:04:24.136 EAL: Trying to obtain current memory policy.
00:04:24.136 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.136 EAL: Restoring previous memory policy: 4
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was expanded by 10MB
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was shrunk by 10MB
00:04:24.136 EAL: Trying to obtain current memory policy.
00:04:24.136 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.136 EAL: Restoring previous memory policy: 4
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.136 EAL: Heap on socket 0 was expanded by 18MB
00:04:24.136 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.136 EAL: request: mp_malloc_sync
00:04:24.136 EAL: No shared files mode enabled, IPC is disabled
00:04:24.137 EAL: Heap on socket 0 was shrunk by 18MB
00:04:24.137 EAL: Trying to obtain current memory policy.
00:04:24.137 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.137 EAL: Restoring previous memory policy: 4
00:04:24.137 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.137 EAL: request: mp_malloc_sync
00:04:24.137 EAL: No shared files mode enabled, IPC is disabled
00:04:24.137 EAL: Heap on socket 0 was expanded by 34MB
00:04:24.137 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.137 EAL: request: mp_malloc_sync
00:04:24.137 EAL: No shared files mode enabled, IPC is disabled
00:04:24.137 EAL: Heap on socket 0 was shrunk by 34MB
00:04:24.396 EAL: Trying to obtain current memory policy.
00:04:24.396 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.396 EAL: Restoring previous memory policy: 4
00:04:24.396 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.396 EAL: request: mp_malloc_sync
00:04:24.396 EAL: No shared files mode enabled, IPC is disabled
00:04:24.396 EAL: Heap on socket 0 was expanded by 66MB
00:04:24.396 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.396 EAL: request: mp_malloc_sync
00:04:24.396 EAL: No shared files mode enabled, IPC is disabled
00:04:24.396 EAL: Heap on socket 0 was shrunk by 66MB
00:04:24.396 EAL: Trying to obtain current memory policy.
00:04:24.396 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:24.655 EAL: Restoring previous memory policy: 4
00:04:24.655 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.655 EAL: request: mp_malloc_sync
00:04:24.655 EAL: No shared files mode enabled, IPC is disabled
00:04:24.655 EAL: Heap on socket 0 was expanded by 130MB
00:04:24.655 EAL: Calling mem event callback 'spdk:(nil)'
00:04:24.915 EAL: request: mp_malloc_sync
00:04:24.915 EAL: No shared files mode enabled, IPC is disabled
00:04:24.915 EAL: Heap on socket 0 was shrunk by 130MB
00:04:24.915 EAL: Trying to obtain current memory policy.
00:04:24.915 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:25.173 EAL: Restoring previous memory policy: 4
00:04:25.173 EAL: Calling mem event callback 'spdk:(nil)'
00:04:25.173 EAL: request: mp_malloc_sync
00:04:25.173 EAL: No shared files mode enabled, IPC is disabled
00:04:25.173 EAL: Heap on socket 0 was expanded by 258MB
00:04:25.432 EAL: Calling mem event callback 'spdk:(nil)'
00:04:25.432 EAL: request: mp_malloc_sync
00:04:25.432 EAL: No shared files mode enabled, IPC is disabled
00:04:25.432 EAL: Heap on socket 0 was shrunk by 258MB
00:04:26.000 EAL: Trying to obtain current memory policy.
00:04:26.000 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:26.000 EAL: Restoring previous memory policy: 4
00:04:26.000 EAL: Calling mem event callback 'spdk:(nil)'
00:04:26.000 EAL: request: mp_malloc_sync
00:04:26.000 EAL: No shared files mode enabled, IPC is disabled
00:04:26.000 EAL: Heap on socket 0 was expanded by 514MB
00:04:26.937 EAL: Calling mem event callback 'spdk:(nil)'
00:04:26.937 EAL: request: mp_malloc_sync
00:04:26.937 EAL: No shared files mode enabled, IPC is disabled
00:04:26.937 EAL: Heap on socket 0 was shrunk by 514MB
00:04:27.505 EAL: Trying to obtain current memory policy.
00:04:27.505 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:28.074 EAL: Restoring previous memory policy: 4
00:04:28.074 EAL: Calling mem event callback 'spdk:(nil)'
00:04:28.074 EAL: request: mp_malloc_sync
00:04:28.074 EAL: No shared files mode enabled, IPC is disabled
00:04:28.074 EAL: Heap on socket 0 was expanded by 1026MB
00:04:29.453 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.712 EAL: request: mp_malloc_sync
00:04:29.712 EAL: No shared files mode enabled, IPC is disabled
00:04:29.712 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:31.091 passed
00:04:31.091
00:04:31.091 Run Summary: Type Total Ran Passed Failed Inactive
00:04:31.091 suites 1 1 n/a 0 0
00:04:31.091 tests 2 2 2 0 0
00:04:31.091 asserts 5698 5698 5698 0 n/a
00:04:31.091
00:04:31.091 Elapsed time = 7.425 seconds
00:04:31.091 EAL: Calling mem event callback 'spdk:(nil)'
00:04:31.091 EAL: request: mp_malloc_sync
00:04:31.091 EAL: No shared files mode enabled, IPC is disabled
00:04:31.091 EAL: Heap on socket 0 was shrunk by 2MB
00:04:31.091 EAL: No shared files mode enabled, IPC is disabled
00:04:31.091 EAL: No shared files mode enabled, IPC is disabled
00:04:31.091 EAL: No shared files mode enabled, IPC is disabled
00:04:31.091 ************************************
00:04:31.091 END TEST env_vtophys
00:04:31.091 ************************************
00:04:31.091
00:04:31.091 real 0m7.769s
00:04:31.091 user 0m6.352s
00:04:31.091 sys 0m1.247s
00:04:31.091 09:58:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:31.091 09:58:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:31.091 09:58:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:31.091 09:58:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:31.091 09:58:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:31.091 09:58:45 env -- common/autotest_common.sh@10 -- # set +x
00:04:31.091 ************************************
00:04:31.091 START TEST env_pci
00:04:31.091 ************************************
00:04:31.091 09:58:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:31.091
00:04:31.091
00:04:31.091 CUnit - A unit testing framework for C - Version 2.1-3
00:04:31.091 http://cunit.sourceforge.net/
00:04:31.091
00:04:31.091
00:04:31.091 Suite: pci
00:04:31.091 Test: pci_hook ...[2024-11-19 09:58:45.308667] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56571 has claimed it
00:04:31.350 passedEAL: Cannot find device (10000:00:01.0)
00:04:31.350 EAL: Failed to attach device on primary process
00:04:31.350
00:04:31.350
00:04:31.350 Run Summary: Type Total Ran Passed Failed Inactive
00:04:31.350 suites 1 1 n/a 0 0
00:04:31.350 tests 1 1 1 0 0
00:04:31.350 asserts 25 25 25 0 n/a
00:04:31.350
00:04:31.350 Elapsed time = 0.010 seconds
00:04:31.350
00:04:31.350 real 0m0.091s
00:04:31.350 user 0m0.045s
00:04:31.350 sys 0m0.044s
00:04:31.350 09:58:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:31.350 ************************************
00:04:31.350 END TEST env_pci
00:04:31.350 ************************************
00:04:31.350 09:58:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:31.350 09:58:45 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:31.350 09:58:45 env -- env/env.sh@15 -- # uname
00:04:31.350 09:58:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:31.350 09:58:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:31.350 09:58:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:31.350 09:58:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:31.350 09:58:45 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.350 09:58:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.350 ************************************ 00:04:31.350 START TEST env_dpdk_post_init 00:04:31.350 ************************************ 00:04:31.350 09:58:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.350 EAL: Detected CPU lcores: 10 00:04:31.350 EAL: Detected NUMA nodes: 1 00:04:31.350 EAL: Detected shared linkage of DPDK 00:04:31.350 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.350 EAL: Selected IOVA mode 'PA' 00:04:31.609 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.609 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:31.609 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:31.609 Starting DPDK initialization... 00:04:31.609 Starting SPDK post initialization... 00:04:31.609 SPDK NVMe probe 00:04:31.609 Attaching to 0000:00:10.0 00:04:31.609 Attaching to 0000:00:11.0 00:04:31.609 Attached to 0000:00:10.0 00:04:31.609 Attached to 0000:00:11.0 00:04:31.609 Cleaning up... 
00:04:31.609 00:04:31.609 real 0m0.304s 00:04:31.609 user 0m0.103s 00:04:31.609 sys 0m0.100s 00:04:31.609 09:58:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.609 ************************************ 00:04:31.609 END TEST env_dpdk_post_init 00:04:31.609 ************************************ 00:04:31.610 09:58:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.610 09:58:45 env -- env/env.sh@26 -- # uname 00:04:31.610 09:58:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.610 09:58:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.610 09:58:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.610 09:58:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.610 09:58:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.610 ************************************ 00:04:31.610 START TEST env_mem_callbacks 00:04:31.610 ************************************ 00:04:31.610 09:58:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.610 EAL: Detected CPU lcores: 10 00:04:31.610 EAL: Detected NUMA nodes: 1 00:04:31.610 EAL: Detected shared linkage of DPDK 00:04:31.875 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.875 EAL: Selected IOVA mode 'PA' 00:04:31.875 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.875 00:04:31.875 00:04:31.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.875 http://cunit.sourceforge.net/ 00:04:31.875 00:04:31.875 00:04:31.876 Suite: memory 00:04:31.876 Test: test ... 
00:04:31.876 register 0x200000200000 2097152 00:04:31.876 malloc 3145728 00:04:31.876 register 0x200000400000 4194304 00:04:31.876 buf 0x2000004fffc0 len 3145728 PASSED 00:04:31.876 malloc 64 00:04:31.876 buf 0x2000004ffec0 len 64 PASSED 00:04:31.876 malloc 4194304 00:04:31.876 register 0x200000800000 6291456 00:04:31.876 buf 0x2000009fffc0 len 4194304 PASSED 00:04:31.876 free 0x2000004fffc0 3145728 00:04:31.876 free 0x2000004ffec0 64 00:04:31.876 unregister 0x200000400000 4194304 PASSED 00:04:31.876 free 0x2000009fffc0 4194304 00:04:31.876 unregister 0x200000800000 6291456 PASSED 00:04:31.876 malloc 8388608 00:04:31.876 register 0x200000400000 10485760 00:04:31.876 buf 0x2000005fffc0 len 8388608 PASSED 00:04:31.876 free 0x2000005fffc0 8388608 00:04:31.876 unregister 0x200000400000 10485760 PASSED 00:04:31.876 passed 00:04:31.876 00:04:31.876 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.876 suites 1 1 n/a 0 0 00:04:31.876 tests 1 1 1 0 0 00:04:31.876 asserts 15 15 15 0 n/a 00:04:31.876 00:04:31.876 Elapsed time = 0.059 seconds 00:04:31.876 00:04:31.876 real 0m0.275s 00:04:31.876 user 0m0.098s 00:04:31.876 sys 0m0.075s 00:04:31.876 ************************************ 00:04:31.876 END TEST env_mem_callbacks 00:04:31.876 ************************************ 00:04:31.876 09:58:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.876 09:58:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:31.876 00:04:31.876 real 0m9.301s 00:04:31.876 user 0m7.154s 00:04:31.876 sys 0m1.729s 00:04:31.876 09:58:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.876 ************************************ 00:04:31.876 END TEST env 00:04:31.876 ************************************ 00:04:31.876 09:58:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.143 09:58:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.143 09:58:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.143 09:58:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.143 09:58:46 -- common/autotest_common.sh@10 -- # set +x 00:04:32.143 ************************************ 00:04:32.143 START TEST rpc 00:04:32.143 ************************************ 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.143 * Looking for test storage... 00:04:32.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.143 09:58:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.143 09:58:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.143 09:58:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.143 09:58:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.143 09:58:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.143 09:58:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.143 09:58:46 rpc -- scripts/common.sh@345 -- # : 1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.143 09:58:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.143 09:58:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.143 09:58:46 rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.143 09:58:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.143 09:58:46 rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.143 09:58:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.143 09:58:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.143 09:58:46 rpc -- scripts/common.sh@368 -- # return 0 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.143 --rc genhtml_branch_coverage=1 00:04:32.143 --rc genhtml_function_coverage=1 00:04:32.143 --rc genhtml_legend=1 00:04:32.143 --rc geninfo_all_blocks=1 00:04:32.143 --rc geninfo_unexecuted_blocks=1 00:04:32.143 00:04:32.143 ' 00:04:32.143 09:58:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.144 --rc genhtml_branch_coverage=1 00:04:32.144 --rc genhtml_function_coverage=1 00:04:32.144 --rc genhtml_legend=1 00:04:32.144 --rc geninfo_all_blocks=1 00:04:32.144 --rc geninfo_unexecuted_blocks=1 00:04:32.144 00:04:32.144 ' 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:32.144 --rc genhtml_branch_coverage=1 00:04:32.144 --rc genhtml_function_coverage=1 00:04:32.144 --rc genhtml_legend=1 00:04:32.144 --rc geninfo_all_blocks=1 00:04:32.144 --rc geninfo_unexecuted_blocks=1 00:04:32.144 00:04:32.144 ' 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.144 --rc genhtml_branch_coverage=1 00:04:32.144 --rc genhtml_function_coverage=1 00:04:32.144 --rc genhtml_legend=1 00:04:32.144 --rc geninfo_all_blocks=1 00:04:32.144 --rc geninfo_unexecuted_blocks=1 00:04:32.144 00:04:32.144 ' 00:04:32.144 09:58:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56698 00:04:32.144 09:58:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.144 09:58:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.144 09:58:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56698 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 56698 ']' 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.144 09:58:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.403 [2024-11-19 09:58:46.486183] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:04:32.403 [2024-11-19 09:58:46.486395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56698 ] 00:04:32.663 [2024-11-19 09:58:46.674919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.663 [2024-11-19 09:58:46.815593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.663 [2024-11-19 09:58:46.815710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56698' to capture a snapshot of events at runtime. 00:04:32.663 [2024-11-19 09:58:46.815727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.663 [2024-11-19 09:58:46.815743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.663 [2024-11-19 09:58:46.815754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56698 for offline analysis/debug. 
00:04:32.663 [2024-11-19 09:58:46.817258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.601 09:58:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.601 09:58:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.601 09:58:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.601 09:58:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.601 09:58:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.601 09:58:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.601 09:58:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.601 09:58:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.601 09:58:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.601 ************************************ 00:04:33.601 START TEST rpc_integrity 00:04:33.601 ************************************ 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.601 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.601 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.601 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.601 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.601 09:58:47 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.601 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.601 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.861 { 00:04:33.861 "name": "Malloc0", 00:04:33.861 "aliases": [ 00:04:33.861 "c2ef23ca-2cf7-4866-9e37-0e9ee282c707" 00:04:33.861 ], 00:04:33.861 "product_name": "Malloc disk", 00:04:33.861 "block_size": 512, 00:04:33.861 "num_blocks": 16384, 00:04:33.861 "uuid": "c2ef23ca-2cf7-4866-9e37-0e9ee282c707", 00:04:33.861 "assigned_rate_limits": { 00:04:33.861 "rw_ios_per_sec": 0, 00:04:33.861 "rw_mbytes_per_sec": 0, 00:04:33.861 "r_mbytes_per_sec": 0, 00:04:33.861 "w_mbytes_per_sec": 0 00:04:33.861 }, 00:04:33.861 "claimed": false, 00:04:33.861 "zoned": false, 00:04:33.861 "supported_io_types": { 00:04:33.861 "read": true, 00:04:33.861 "write": true, 00:04:33.861 "unmap": true, 00:04:33.861 "flush": true, 00:04:33.861 "reset": true, 00:04:33.861 "nvme_admin": false, 00:04:33.861 "nvme_io": false, 00:04:33.861 "nvme_io_md": false, 00:04:33.861 "write_zeroes": true, 00:04:33.861 "zcopy": true, 00:04:33.861 "get_zone_info": false, 00:04:33.861 "zone_management": false, 00:04:33.861 "zone_append": false, 00:04:33.861 "compare": false, 00:04:33.861 "compare_and_write": false, 00:04:33.861 "abort": true, 00:04:33.861 "seek_hole": false, 
00:04:33.861 "seek_data": false, 00:04:33.861 "copy": true, 00:04:33.861 "nvme_iov_md": false 00:04:33.861 }, 00:04:33.861 "memory_domains": [ 00:04:33.861 { 00:04:33.861 "dma_device_id": "system", 00:04:33.861 "dma_device_type": 1 00:04:33.861 }, 00:04:33.861 { 00:04:33.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.861 "dma_device_type": 2 00:04:33.861 } 00:04:33.861 ], 00:04:33.861 "driver_specific": {} 00:04:33.861 } 00:04:33.861 ]' 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.861 [2024-11-19 09:58:47.909605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.861 [2024-11-19 09:58:47.909699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.861 [2024-11-19 09:58:47.909730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:33.861 [2024-11-19 09:58:47.909750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.861 [2024-11-19 09:58:47.913115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.861 [2024-11-19 09:58:47.913175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.861 Passthru0 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.861 { 00:04:33.861 "name": "Malloc0", 00:04:33.861 "aliases": [ 00:04:33.861 "c2ef23ca-2cf7-4866-9e37-0e9ee282c707" 00:04:33.861 ], 00:04:33.861 "product_name": "Malloc disk", 00:04:33.861 "block_size": 512, 00:04:33.861 "num_blocks": 16384, 00:04:33.861 "uuid": "c2ef23ca-2cf7-4866-9e37-0e9ee282c707", 00:04:33.861 "assigned_rate_limits": { 00:04:33.861 "rw_ios_per_sec": 0, 00:04:33.861 "rw_mbytes_per_sec": 0, 00:04:33.861 "r_mbytes_per_sec": 0, 00:04:33.861 "w_mbytes_per_sec": 0 00:04:33.861 }, 00:04:33.861 "claimed": true, 00:04:33.861 "claim_type": "exclusive_write", 00:04:33.861 "zoned": false, 00:04:33.861 "supported_io_types": { 00:04:33.861 "read": true, 00:04:33.861 "write": true, 00:04:33.861 "unmap": true, 00:04:33.861 "flush": true, 00:04:33.861 "reset": true, 00:04:33.861 "nvme_admin": false, 00:04:33.861 "nvme_io": false, 00:04:33.861 "nvme_io_md": false, 00:04:33.861 "write_zeroes": true, 00:04:33.861 "zcopy": true, 00:04:33.861 "get_zone_info": false, 00:04:33.861 "zone_management": false, 00:04:33.861 "zone_append": false, 00:04:33.861 "compare": false, 00:04:33.861 "compare_and_write": false, 00:04:33.861 "abort": true, 00:04:33.861 "seek_hole": false, 00:04:33.861 "seek_data": false, 00:04:33.861 "copy": true, 00:04:33.861 "nvme_iov_md": false 00:04:33.861 }, 00:04:33.861 "memory_domains": [ 00:04:33.861 { 00:04:33.861 "dma_device_id": "system", 00:04:33.861 "dma_device_type": 1 00:04:33.861 }, 00:04:33.861 { 00:04:33.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.861 "dma_device_type": 2 00:04:33.861 } 00:04:33.861 ], 00:04:33.861 "driver_specific": {} 00:04:33.861 }, 00:04:33.861 { 00:04:33.861 "name": "Passthru0", 00:04:33.861 "aliases": [ 00:04:33.861 "12aa2a7f-6fa1-5c20-913a-bbf3342bdf90" 00:04:33.861 ], 00:04:33.861 "product_name": "passthru", 00:04:33.861 
"block_size": 512, 00:04:33.861 "num_blocks": 16384, 00:04:33.861 "uuid": "12aa2a7f-6fa1-5c20-913a-bbf3342bdf90", 00:04:33.861 "assigned_rate_limits": { 00:04:33.861 "rw_ios_per_sec": 0, 00:04:33.861 "rw_mbytes_per_sec": 0, 00:04:33.861 "r_mbytes_per_sec": 0, 00:04:33.861 "w_mbytes_per_sec": 0 00:04:33.861 }, 00:04:33.861 "claimed": false, 00:04:33.861 "zoned": false, 00:04:33.861 "supported_io_types": { 00:04:33.861 "read": true, 00:04:33.861 "write": true, 00:04:33.861 "unmap": true, 00:04:33.861 "flush": true, 00:04:33.861 "reset": true, 00:04:33.861 "nvme_admin": false, 00:04:33.861 "nvme_io": false, 00:04:33.861 "nvme_io_md": false, 00:04:33.861 "write_zeroes": true, 00:04:33.861 "zcopy": true, 00:04:33.861 "get_zone_info": false, 00:04:33.861 "zone_management": false, 00:04:33.861 "zone_append": false, 00:04:33.861 "compare": false, 00:04:33.861 "compare_and_write": false, 00:04:33.861 "abort": true, 00:04:33.861 "seek_hole": false, 00:04:33.861 "seek_data": false, 00:04:33.861 "copy": true, 00:04:33.861 "nvme_iov_md": false 00:04:33.861 }, 00:04:33.861 "memory_domains": [ 00:04:33.861 { 00:04:33.861 "dma_device_id": "system", 00:04:33.861 "dma_device_type": 1 00:04:33.861 }, 00:04:33.861 { 00:04:33.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.861 "dma_device_type": 2 00:04:33.861 } 00:04:33.861 ], 00:04:33.861 "driver_specific": { 00:04:33.861 "passthru": { 00:04:33.861 "name": "Passthru0", 00:04:33.861 "base_bdev_name": "Malloc0" 00:04:33.861 } 00:04:33.861 } 00:04:33.861 } 00:04:33.861 ]' 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.861 09:58:47 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.861 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.861 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.861 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.861 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.861 09:58:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.861 09:58:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.121 09:58:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.121 00:04:34.121 real 0m0.350s 00:04:34.121 user 0m0.220s 00:04:34.121 sys 0m0.033s 00:04:34.121 ************************************ 00:04:34.121 END TEST rpc_integrity 00:04:34.121 ************************************ 00:04:34.121 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 09:58:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.121 09:58:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.121 09:58:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.121 09:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 ************************************ 00:04:34.121 START TEST rpc_plugins 00:04:34.121 ************************************ 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.121 { 00:04:34.121 "name": "Malloc1", 00:04:34.121 "aliases": [ 00:04:34.121 "d716865e-9353-4926-9cd7-180d53f91658" 00:04:34.121 ], 00:04:34.121 "product_name": "Malloc disk", 00:04:34.121 "block_size": 4096, 00:04:34.121 "num_blocks": 256, 00:04:34.121 "uuid": "d716865e-9353-4926-9cd7-180d53f91658", 00:04:34.121 "assigned_rate_limits": { 00:04:34.121 "rw_ios_per_sec": 0, 00:04:34.121 "rw_mbytes_per_sec": 0, 00:04:34.121 "r_mbytes_per_sec": 0, 00:04:34.121 "w_mbytes_per_sec": 0 00:04:34.121 }, 00:04:34.121 "claimed": false, 00:04:34.121 "zoned": false, 00:04:34.121 "supported_io_types": { 00:04:34.121 "read": true, 00:04:34.121 "write": true, 00:04:34.121 "unmap": true, 00:04:34.121 "flush": true, 00:04:34.121 "reset": true, 00:04:34.121 "nvme_admin": false, 00:04:34.121 "nvme_io": false, 00:04:34.121 "nvme_io_md": false, 00:04:34.121 "write_zeroes": true, 00:04:34.121 "zcopy": true, 00:04:34.121 "get_zone_info": false, 00:04:34.121 "zone_management": false, 00:04:34.121 "zone_append": false, 00:04:34.121 "compare": false, 00:04:34.121 "compare_and_write": false, 00:04:34.121 "abort": true, 00:04:34.121 "seek_hole": false, 00:04:34.121 "seek_data": false, 00:04:34.121 "copy": 
true, 00:04:34.121 "nvme_iov_md": false 00:04:34.121 }, 00:04:34.121 "memory_domains": [ 00:04:34.121 { 00:04:34.121 "dma_device_id": "system", 00:04:34.121 "dma_device_type": 1 00:04:34.121 }, 00:04:34.121 { 00:04:34.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.121 "dma_device_type": 2 00:04:34.121 } 00:04:34.121 ], 00:04:34.121 "driver_specific": {} 00:04:34.121 } 00:04:34.121 ]' 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.121 09:58:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.121 00:04:34.121 real 0m0.173s 00:04:34.121 user 0m0.115s 00:04:34.121 sys 0m0.018s 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.121 09:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 ************************************ 00:04:34.121 END TEST rpc_plugins 00:04:34.121 ************************************ 00:04:34.381 09:58:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.381 09:58:48 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.381 09:58:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.381 09:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.381 ************************************ 00:04:34.381 START TEST rpc_trace_cmd_test 00:04:34.381 ************************************ 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.381 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56698", 00:04:34.381 "tpoint_group_mask": "0x8", 00:04:34.381 "iscsi_conn": { 00:04:34.381 "mask": "0x2", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "scsi": { 00:04:34.381 "mask": "0x4", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "bdev": { 00:04:34.381 "mask": "0x8", 00:04:34.381 "tpoint_mask": "0xffffffffffffffff" 00:04:34.381 }, 00:04:34.381 "nvmf_rdma": { 00:04:34.381 "mask": "0x10", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "nvmf_tcp": { 00:04:34.381 "mask": "0x20", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "ftl": { 00:04:34.381 "mask": "0x40", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "blobfs": { 00:04:34.381 "mask": "0x80", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "dsa": { 00:04:34.381 "mask": "0x200", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "thread": { 00:04:34.381 "mask": "0x400", 00:04:34.381 
"tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "nvme_pcie": { 00:04:34.381 "mask": "0x800", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "iaa": { 00:04:34.381 "mask": "0x1000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "nvme_tcp": { 00:04:34.381 "mask": "0x2000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "bdev_nvme": { 00:04:34.381 "mask": "0x4000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "sock": { 00:04:34.381 "mask": "0x8000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "blob": { 00:04:34.381 "mask": "0x10000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "bdev_raid": { 00:04:34.381 "mask": "0x20000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 }, 00:04:34.381 "scheduler": { 00:04:34.381 "mask": "0x40000", 00:04:34.381 "tpoint_mask": "0x0" 00:04:34.381 } 00:04:34.381 }' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.381 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.640 09:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.640 00:04:34.640 real 0m0.288s 00:04:34.640 user 0m0.245s 00:04:34.640 sys 0m0.035s 00:04:34.640 ************************************ 00:04:34.640 END TEST rpc_trace_cmd_test 00:04:34.640 
************************************ 00:04:34.640 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.640 09:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 09:58:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.640 09:58:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.640 09:58:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.640 09:58:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.640 09:58:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.640 09:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 ************************************ 00:04:34.640 START TEST rpc_daemon_integrity 00:04:34.640 ************************************ 00:04:34.640 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.640 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.640 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.640 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.640 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.641 { 00:04:34.641 "name": "Malloc2", 00:04:34.641 "aliases": [ 00:04:34.641 "29fb2566-38d7-4c18-822c-d350e2dab5ee" 00:04:34.641 ], 00:04:34.641 "product_name": "Malloc disk", 00:04:34.641 "block_size": 512, 00:04:34.641 "num_blocks": 16384, 00:04:34.641 "uuid": "29fb2566-38d7-4c18-822c-d350e2dab5ee", 00:04:34.641 "assigned_rate_limits": { 00:04:34.641 "rw_ios_per_sec": 0, 00:04:34.641 "rw_mbytes_per_sec": 0, 00:04:34.641 "r_mbytes_per_sec": 0, 00:04:34.641 "w_mbytes_per_sec": 0 00:04:34.641 }, 00:04:34.641 "claimed": false, 00:04:34.641 "zoned": false, 00:04:34.641 "supported_io_types": { 00:04:34.641 "read": true, 00:04:34.641 "write": true, 00:04:34.641 "unmap": true, 00:04:34.641 "flush": true, 00:04:34.641 "reset": true, 00:04:34.641 "nvme_admin": false, 00:04:34.641 "nvme_io": false, 00:04:34.641 "nvme_io_md": false, 00:04:34.641 "write_zeroes": true, 00:04:34.641 "zcopy": true, 00:04:34.641 "get_zone_info": false, 00:04:34.641 "zone_management": false, 00:04:34.641 "zone_append": false, 00:04:34.641 "compare": false, 00:04:34.641 "compare_and_write": false, 00:04:34.641 "abort": true, 00:04:34.641 "seek_hole": false, 00:04:34.641 "seek_data": false, 00:04:34.641 "copy": true, 00:04:34.641 "nvme_iov_md": false 00:04:34.641 }, 00:04:34.641 "memory_domains": [ 00:04:34.641 { 00:04:34.641 "dma_device_id": "system", 00:04:34.641 "dma_device_type": 1 00:04:34.641 }, 00:04:34.641 { 00:04:34.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.641 "dma_device_type": 2 00:04:34.641 } 
00:04:34.641 ], 00:04:34.641 "driver_specific": {} 00:04:34.641 } 00:04:34.641 ]' 00:04:34.641 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 [2024-11-19 09:58:48.888844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.901 [2024-11-19 09:58:48.888921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.901 [2024-11-19 09:58:48.888956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:34.901 [2024-11-19 09:58:48.888975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.901 [2024-11-19 09:58:48.892254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.901 [2024-11-19 09:58:48.892314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.901 Passthru0 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.901 { 00:04:34.901 "name": "Malloc2", 00:04:34.901 "aliases": [ 00:04:34.901 "29fb2566-38d7-4c18-822c-d350e2dab5ee" 
00:04:34.901 ], 00:04:34.901 "product_name": "Malloc disk", 00:04:34.901 "block_size": 512, 00:04:34.901 "num_blocks": 16384, 00:04:34.901 "uuid": "29fb2566-38d7-4c18-822c-d350e2dab5ee", 00:04:34.901 "assigned_rate_limits": { 00:04:34.901 "rw_ios_per_sec": 0, 00:04:34.901 "rw_mbytes_per_sec": 0, 00:04:34.901 "r_mbytes_per_sec": 0, 00:04:34.901 "w_mbytes_per_sec": 0 00:04:34.901 }, 00:04:34.901 "claimed": true, 00:04:34.901 "claim_type": "exclusive_write", 00:04:34.901 "zoned": false, 00:04:34.901 "supported_io_types": { 00:04:34.901 "read": true, 00:04:34.901 "write": true, 00:04:34.901 "unmap": true, 00:04:34.901 "flush": true, 00:04:34.901 "reset": true, 00:04:34.901 "nvme_admin": false, 00:04:34.901 "nvme_io": false, 00:04:34.901 "nvme_io_md": false, 00:04:34.901 "write_zeroes": true, 00:04:34.901 "zcopy": true, 00:04:34.901 "get_zone_info": false, 00:04:34.901 "zone_management": false, 00:04:34.901 "zone_append": false, 00:04:34.901 "compare": false, 00:04:34.901 "compare_and_write": false, 00:04:34.901 "abort": true, 00:04:34.901 "seek_hole": false, 00:04:34.901 "seek_data": false, 00:04:34.901 "copy": true, 00:04:34.901 "nvme_iov_md": false 00:04:34.901 }, 00:04:34.901 "memory_domains": [ 00:04:34.901 { 00:04:34.901 "dma_device_id": "system", 00:04:34.901 "dma_device_type": 1 00:04:34.901 }, 00:04:34.901 { 00:04:34.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.901 "dma_device_type": 2 00:04:34.901 } 00:04:34.901 ], 00:04:34.901 "driver_specific": {} 00:04:34.901 }, 00:04:34.901 { 00:04:34.901 "name": "Passthru0", 00:04:34.901 "aliases": [ 00:04:34.901 "a88bb6c7-1c41-55bf-a3ef-1962e6b4ffac" 00:04:34.901 ], 00:04:34.901 "product_name": "passthru", 00:04:34.901 "block_size": 512, 00:04:34.901 "num_blocks": 16384, 00:04:34.901 "uuid": "a88bb6c7-1c41-55bf-a3ef-1962e6b4ffac", 00:04:34.901 "assigned_rate_limits": { 00:04:34.901 "rw_ios_per_sec": 0, 00:04:34.901 "rw_mbytes_per_sec": 0, 00:04:34.901 "r_mbytes_per_sec": 0, 00:04:34.901 "w_mbytes_per_sec": 0 
00:04:34.901 }, 00:04:34.901 "claimed": false, 00:04:34.901 "zoned": false, 00:04:34.901 "supported_io_types": { 00:04:34.901 "read": true, 00:04:34.901 "write": true, 00:04:34.901 "unmap": true, 00:04:34.901 "flush": true, 00:04:34.901 "reset": true, 00:04:34.901 "nvme_admin": false, 00:04:34.901 "nvme_io": false, 00:04:34.901 "nvme_io_md": false, 00:04:34.901 "write_zeroes": true, 00:04:34.901 "zcopy": true, 00:04:34.901 "get_zone_info": false, 00:04:34.901 "zone_management": false, 00:04:34.901 "zone_append": false, 00:04:34.901 "compare": false, 00:04:34.901 "compare_and_write": false, 00:04:34.901 "abort": true, 00:04:34.901 "seek_hole": false, 00:04:34.901 "seek_data": false, 00:04:34.901 "copy": true, 00:04:34.901 "nvme_iov_md": false 00:04:34.901 }, 00:04:34.901 "memory_domains": [ 00:04:34.901 { 00:04:34.901 "dma_device_id": "system", 00:04:34.901 "dma_device_type": 1 00:04:34.901 }, 00:04:34.901 { 00:04:34.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.901 "dma_device_type": 2 00:04:34.901 } 00:04:34.901 ], 00:04:34.901 "driver_specific": { 00:04:34.901 "passthru": { 00:04:34.901 "name": "Passthru0", 00:04:34.901 "base_bdev_name": "Malloc2" 00:04:34.901 } 00:04:34.901 } 00:04:34.901 } 00:04:34.901 ]' 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:34.901 09:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.901 00:04:34.901 real 0m0.360s 00:04:34.901 user 0m0.228s 00:04:34.901 sys 0m0.038s 00:04:34.901 ************************************ 00:04:34.901 END TEST rpc_daemon_integrity 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.901 09:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 ************************************ 00:04:34.901 09:58:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.901 09:58:49 rpc -- rpc/rpc.sh@84 -- # killprocess 56698 00:04:34.901 09:58:49 rpc -- common/autotest_common.sh@954 -- # '[' -z 56698 ']' 00:04:34.901 09:58:49 rpc -- common/autotest_common.sh@958 -- # kill -0 56698 00:04:34.901 09:58:49 rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.901 09:58:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.901 09:58:49 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56698 00:04:35.164 09:58:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.164 killing process with pid 56698 00:04:35.164 09:58:49 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:35.164 09:58:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56698' 00:04:35.164 09:58:49 rpc -- common/autotest_common.sh@973 -- # kill 56698 00:04:35.164 09:58:49 rpc -- common/autotest_common.sh@978 -- # wait 56698 00:04:37.699 00:04:37.699 real 0m5.188s 00:04:37.699 user 0m5.822s 00:04:37.699 sys 0m1.026s 00:04:37.699 09:58:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.699 ************************************ 00:04:37.699 END TEST rpc 00:04:37.699 ************************************ 00:04:37.699 09:58:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.699 09:58:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:37.699 09:58:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.699 09:58:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.699 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.700 ************************************ 00:04:37.700 START TEST skip_rpc 00:04:37.700 ************************************ 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:37.700 * Looking for test storage... 
00:04:37.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.700 09:58:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.700 --rc genhtml_branch_coverage=1 00:04:37.700 --rc genhtml_function_coverage=1 00:04:37.700 --rc genhtml_legend=1 00:04:37.700 --rc geninfo_all_blocks=1 00:04:37.700 --rc geninfo_unexecuted_blocks=1 00:04:37.700 00:04:37.700 ' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.700 --rc genhtml_branch_coverage=1 00:04:37.700 --rc genhtml_function_coverage=1 00:04:37.700 --rc genhtml_legend=1 00:04:37.700 --rc geninfo_all_blocks=1 00:04:37.700 --rc geninfo_unexecuted_blocks=1 00:04:37.700 00:04:37.700 ' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:37.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.700 --rc genhtml_branch_coverage=1 00:04:37.700 --rc genhtml_function_coverage=1 00:04:37.700 --rc genhtml_legend=1 00:04:37.700 --rc geninfo_all_blocks=1 00:04:37.700 --rc geninfo_unexecuted_blocks=1 00:04:37.700 00:04:37.700 ' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.700 --rc genhtml_branch_coverage=1 00:04:37.700 --rc genhtml_function_coverage=1 00:04:37.700 --rc genhtml_legend=1 00:04:37.700 --rc geninfo_all_blocks=1 00:04:37.700 --rc geninfo_unexecuted_blocks=1 00:04:37.700 00:04:37.700 ' 00:04:37.700 09:58:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.700 09:58:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.700 09:58:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.700 09:58:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.700 ************************************ 00:04:37.700 START TEST skip_rpc 00:04:37.700 ************************************ 00:04:37.700 09:58:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:37.700 09:58:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56927 00:04:37.700 09:58:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.700 09:58:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.700 09:58:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.700 [2024-11-19 09:58:51.720928] Starting SPDK v25.01-pre 
git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:04:37.700 [2024-11-19 09:58:51.721157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56927 ] 00:04:37.700 [2024-11-19 09:58:51.901113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.959 [2024-11-19 09:58:52.031484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56927 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56927 ']' 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56927 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56927 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.233 killing process with pid 56927 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56927' 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56927 00:04:43.233 09:58:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56927 00:04:45.139 00:04:45.139 real 0m7.360s 00:04:45.139 user 0m6.730s 00:04:45.139 sys 0m0.529s 00:04:45.139 09:58:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.140 09:58:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.140 ************************************ 00:04:45.140 END TEST skip_rpc 00:04:45.140 ************************************ 00:04:45.140 09:58:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:45.140 09:58:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.140 09:58:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.140 09:58:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.140 
************************************ 00:04:45.140 START TEST skip_rpc_with_json 00:04:45.140 ************************************ 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57036 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57036 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57036 ']' 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.140 09:58:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.140 [2024-11-19 09:58:59.154310] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:04:45.140 [2024-11-19 09:58:59.154531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57036 ] 00:04:45.140 [2024-11-19 09:58:59.342995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.399 [2024-11-19 09:58:59.493139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.350 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.351 [2024-11-19 09:59:00.447230] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:46.351 request: 00:04:46.351 { 00:04:46.351 "trtype": "tcp", 00:04:46.351 "method": "nvmf_get_transports", 00:04:46.351 "req_id": 1 00:04:46.351 } 00:04:46.351 Got JSON-RPC error response 00:04:46.351 response: 00:04:46.351 { 00:04:46.351 "code": -19, 00:04:46.351 "message": "No such device" 00:04:46.351 } 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.351 [2024-11-19 09:59:00.459432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.351 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.654 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.655 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.655 { 00:04:46.655 "subsystems": [ 00:04:46.655 { 00:04:46.655 "subsystem": "fsdev", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "fsdev_set_opts", 00:04:46.655 "params": { 00:04:46.655 "fsdev_io_pool_size": 65535, 00:04:46.655 "fsdev_io_cache_size": 256 00:04:46.655 } 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "keyring", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "iobuf", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "iobuf_set_options", 00:04:46.655 "params": { 00:04:46.655 "small_pool_count": 8192, 00:04:46.655 "large_pool_count": 1024, 00:04:46.655 "small_bufsize": 8192, 00:04:46.655 "large_bufsize": 135168, 00:04:46.655 "enable_numa": false 00:04:46.655 } 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "sock", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "sock_set_default_impl", 00:04:46.655 "params": { 00:04:46.655 "impl_name": "posix" 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "sock_impl_set_options", 00:04:46.655 "params": { 00:04:46.655 "impl_name": "ssl", 00:04:46.655 "recv_buf_size": 4096, 00:04:46.655 "send_buf_size": 4096, 00:04:46.655 "enable_recv_pipe": true, 00:04:46.655 "enable_quickack": false, 00:04:46.655 
"enable_placement_id": 0, 00:04:46.655 "enable_zerocopy_send_server": true, 00:04:46.655 "enable_zerocopy_send_client": false, 00:04:46.655 "zerocopy_threshold": 0, 00:04:46.655 "tls_version": 0, 00:04:46.655 "enable_ktls": false 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "sock_impl_set_options", 00:04:46.655 "params": { 00:04:46.655 "impl_name": "posix", 00:04:46.655 "recv_buf_size": 2097152, 00:04:46.655 "send_buf_size": 2097152, 00:04:46.655 "enable_recv_pipe": true, 00:04:46.655 "enable_quickack": false, 00:04:46.655 "enable_placement_id": 0, 00:04:46.655 "enable_zerocopy_send_server": true, 00:04:46.655 "enable_zerocopy_send_client": false, 00:04:46.655 "zerocopy_threshold": 0, 00:04:46.655 "tls_version": 0, 00:04:46.655 "enable_ktls": false 00:04:46.655 } 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "vmd", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "accel", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "accel_set_options", 00:04:46.655 "params": { 00:04:46.655 "small_cache_size": 128, 00:04:46.655 "large_cache_size": 16, 00:04:46.655 "task_count": 2048, 00:04:46.655 "sequence_count": 2048, 00:04:46.655 "buf_count": 2048 00:04:46.655 } 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "bdev", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "bdev_set_options", 00:04:46.655 "params": { 00:04:46.655 "bdev_io_pool_size": 65535, 00:04:46.655 "bdev_io_cache_size": 256, 00:04:46.655 "bdev_auto_examine": true, 00:04:46.655 "iobuf_small_cache_size": 128, 00:04:46.655 "iobuf_large_cache_size": 16 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "bdev_raid_set_options", 00:04:46.655 "params": { 00:04:46.655 "process_window_size_kb": 1024, 00:04:46.655 "process_max_bandwidth_mb_sec": 0 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "bdev_iscsi_set_options", 
00:04:46.655 "params": { 00:04:46.655 "timeout_sec": 30 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "bdev_nvme_set_options", 00:04:46.655 "params": { 00:04:46.655 "action_on_timeout": "none", 00:04:46.655 "timeout_us": 0, 00:04:46.655 "timeout_admin_us": 0, 00:04:46.655 "keep_alive_timeout_ms": 10000, 00:04:46.655 "arbitration_burst": 0, 00:04:46.655 "low_priority_weight": 0, 00:04:46.655 "medium_priority_weight": 0, 00:04:46.655 "high_priority_weight": 0, 00:04:46.655 "nvme_adminq_poll_period_us": 10000, 00:04:46.655 "nvme_ioq_poll_period_us": 0, 00:04:46.655 "io_queue_requests": 0, 00:04:46.655 "delay_cmd_submit": true, 00:04:46.655 "transport_retry_count": 4, 00:04:46.655 "bdev_retry_count": 3, 00:04:46.655 "transport_ack_timeout": 0, 00:04:46.655 "ctrlr_loss_timeout_sec": 0, 00:04:46.655 "reconnect_delay_sec": 0, 00:04:46.655 "fast_io_fail_timeout_sec": 0, 00:04:46.655 "disable_auto_failback": false, 00:04:46.655 "generate_uuids": false, 00:04:46.655 "transport_tos": 0, 00:04:46.655 "nvme_error_stat": false, 00:04:46.655 "rdma_srq_size": 0, 00:04:46.655 "io_path_stat": false, 00:04:46.655 "allow_accel_sequence": false, 00:04:46.655 "rdma_max_cq_size": 0, 00:04:46.655 "rdma_cm_event_timeout_ms": 0, 00:04:46.655 "dhchap_digests": [ 00:04:46.655 "sha256", 00:04:46.655 "sha384", 00:04:46.655 "sha512" 00:04:46.655 ], 00:04:46.655 "dhchap_dhgroups": [ 00:04:46.655 "null", 00:04:46.655 "ffdhe2048", 00:04:46.655 "ffdhe3072", 00:04:46.655 "ffdhe4096", 00:04:46.655 "ffdhe6144", 00:04:46.655 "ffdhe8192" 00:04:46.655 ] 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "bdev_nvme_set_hotplug", 00:04:46.655 "params": { 00:04:46.655 "period_us": 100000, 00:04:46.655 "enable": false 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "bdev_wait_for_examine" 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "scsi", 00:04:46.655 "config": null 00:04:46.655 }, 00:04:46.655 { 
00:04:46.655 "subsystem": "scheduler", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "framework_set_scheduler", 00:04:46.655 "params": { 00:04:46.655 "name": "static" 00:04:46.655 } 00:04:46.655 } 00:04:46.655 ] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "vhost_scsi", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "vhost_blk", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "ublk", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "nbd", 00:04:46.655 "config": [] 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "subsystem": "nvmf", 00:04:46.655 "config": [ 00:04:46.655 { 00:04:46.655 "method": "nvmf_set_config", 00:04:46.655 "params": { 00:04:46.655 "discovery_filter": "match_any", 00:04:46.655 "admin_cmd_passthru": { 00:04:46.655 "identify_ctrlr": false 00:04:46.655 }, 00:04:46.655 "dhchap_digests": [ 00:04:46.655 "sha256", 00:04:46.655 "sha384", 00:04:46.655 "sha512" 00:04:46.655 ], 00:04:46.655 "dhchap_dhgroups": [ 00:04:46.655 "null", 00:04:46.655 "ffdhe2048", 00:04:46.655 "ffdhe3072", 00:04:46.655 "ffdhe4096", 00:04:46.655 "ffdhe6144", 00:04:46.655 "ffdhe8192" 00:04:46.655 ] 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "nvmf_set_max_subsystems", 00:04:46.655 "params": { 00:04:46.655 "max_subsystems": 1024 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "nvmf_set_crdt", 00:04:46.655 "params": { 00:04:46.655 "crdt1": 0, 00:04:46.655 "crdt2": 0, 00:04:46.655 "crdt3": 0 00:04:46.655 } 00:04:46.655 }, 00:04:46.655 { 00:04:46.655 "method": "nvmf_create_transport", 00:04:46.655 "params": { 00:04:46.655 "trtype": "TCP", 00:04:46.655 "max_queue_depth": 128, 00:04:46.655 "max_io_qpairs_per_ctrlr": 127, 00:04:46.655 "in_capsule_data_size": 4096, 00:04:46.655 "max_io_size": 131072, 00:04:46.655 "io_unit_size": 131072, 00:04:46.655 "max_aq_depth": 128, 00:04:46.655 "num_shared_buffers": 511, 
00:04:46.655 "buf_cache_size": 4294967295, 00:04:46.655 "dif_insert_or_strip": false, 00:04:46.655 "zcopy": false, 00:04:46.655 "c2h_success": true, 00:04:46.655 "sock_priority": 0, 00:04:46.655 "abort_timeout_sec": 1, 00:04:46.655 "ack_timeout": 0, 00:04:46.655 "data_wr_pool_size": 0 00:04:46.655 } 00:04:46.656 } 00:04:46.656 ] 00:04:46.656 }, 00:04:46.656 { 00:04:46.656 "subsystem": "iscsi", 00:04:46.656 "config": [ 00:04:46.656 { 00:04:46.656 "method": "iscsi_set_options", 00:04:46.656 "params": { 00:04:46.656 "node_base": "iqn.2016-06.io.spdk", 00:04:46.656 "max_sessions": 128, 00:04:46.656 "max_connections_per_session": 2, 00:04:46.656 "max_queue_depth": 64, 00:04:46.656 "default_time2wait": 2, 00:04:46.656 "default_time2retain": 20, 00:04:46.656 "first_burst_length": 8192, 00:04:46.656 "immediate_data": true, 00:04:46.656 "allow_duplicated_isid": false, 00:04:46.656 "error_recovery_level": 0, 00:04:46.656 "nop_timeout": 60, 00:04:46.656 "nop_in_interval": 30, 00:04:46.656 "disable_chap": false, 00:04:46.656 "require_chap": false, 00:04:46.656 "mutual_chap": false, 00:04:46.656 "chap_group": 0, 00:04:46.656 "max_large_datain_per_connection": 64, 00:04:46.656 "max_r2t_per_connection": 4, 00:04:46.656 "pdu_pool_size": 36864, 00:04:46.656 "immediate_data_pool_size": 16384, 00:04:46.656 "data_out_pool_size": 2048 00:04:46.656 } 00:04:46.656 } 00:04:46.656 ] 00:04:46.656 } 00:04:46.656 ] 00:04:46.656 } 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57036 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57036 ']' 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57036 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57036 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.656 killing process with pid 57036 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57036' 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57036 00:04:46.656 09:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57036 00:04:49.209 09:59:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57087 00:04:49.209 09:59:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.209 09:59:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57087 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57087 ']' 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57087 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57087 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.481 killing process with pid 57087 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.481 09:59:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57087' 00:04:54.481 09:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57087 00:04:54.481 09:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57087 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.385 00:04:56.385 real 0m11.280s 00:04:56.385 user 0m10.481s 00:04:56.385 sys 0m1.236s 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 ************************************ 00:04:56.385 END TEST skip_rpc_with_json 00:04:56.385 ************************************ 00:04:56.385 09:59:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:56.385 09:59:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.385 09:59:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.385 09:59:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 ************************************ 00:04:56.385 START TEST skip_rpc_with_delay 00:04:56.385 ************************************ 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:56.385 
09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.385 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.386 [2024-11-19 09:59:10.475719] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:56.386 ************************************ 00:04:56.386 END TEST skip_rpc_with_delay 00:04:56.386 ************************************ 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.386 00:04:56.386 real 0m0.205s 00:04:56.386 user 0m0.111s 00:04:56.386 sys 0m0.091s 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.386 09:59:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:56.386 09:59:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:56.386 09:59:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:56.386 09:59:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:56.386 09:59:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.386 09:59:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.386 09:59:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.386 ************************************ 00:04:56.386 START TEST exit_on_failed_rpc_init 00:04:56.386 ************************************ 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57221 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57221 00:04:56.386 09:59:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57221 ']' 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.386 09:59:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.645 [2024-11-19 09:59:10.716949] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:04:56.645 [2024-11-19 09:59:10.717151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57221 ] 00:04:56.903 [2024-11-19 09:59:10.895067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.903 [2024-11-19 09:59:11.044473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.839 09:59:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.839 09:59:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.098 [2024-11-19 09:59:12.149859] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:04:58.098 [2024-11-19 09:59:12.150064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57241 ] 00:04:58.356 [2024-11-19 09:59:12.336377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.357 [2024-11-19 09:59:12.505576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.357 [2024-11-19 09:59:12.505749] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:58.357 [2024-11-19 09:59:12.505775] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.357 [2024-11-19 09:59:12.505818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57221 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57221 ']' 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57221 00:04:58.623 09:59:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57221 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.623 killing process with pid 57221 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57221' 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57221 00:04:58.623 09:59:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57221 00:05:01.172 00:05:01.172 real 0m4.431s 00:05:01.172 user 0m4.865s 00:05:01.172 sys 0m0.819s 00:05:01.172 09:59:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.172 09:59:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.172 ************************************ 00:05:01.172 END TEST exit_on_failed_rpc_init 00:05:01.172 ************************************ 00:05:01.172 09:59:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.172 00:05:01.172 real 0m23.686s 00:05:01.173 user 0m22.388s 00:05:01.173 sys 0m2.876s 00:05:01.173 09:59:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.173 09:59:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.173 ************************************ 00:05:01.173 END TEST skip_rpc 00:05:01.173 ************************************ 00:05:01.173 09:59:15 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.173 09:59:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.173 09:59:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.173 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.173 ************************************ 00:05:01.173 START TEST rpc_client 00:05:01.173 ************************************ 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.173 * Looking for test storage... 00:05:01.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.173 09:59:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.173 --rc genhtml_branch_coverage=1 00:05:01.173 --rc genhtml_function_coverage=1 00:05:01.173 --rc genhtml_legend=1 00:05:01.173 --rc geninfo_all_blocks=1 00:05:01.173 --rc geninfo_unexecuted_blocks=1 00:05:01.173 00:05:01.173 ' 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.173 --rc genhtml_branch_coverage=1 00:05:01.173 --rc genhtml_function_coverage=1 00:05:01.173 --rc 
genhtml_legend=1 00:05:01.173 --rc geninfo_all_blocks=1 00:05:01.173 --rc geninfo_unexecuted_blocks=1 00:05:01.173 00:05:01.173 ' 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.173 --rc genhtml_branch_coverage=1 00:05:01.173 --rc genhtml_function_coverage=1 00:05:01.173 --rc genhtml_legend=1 00:05:01.173 --rc geninfo_all_blocks=1 00:05:01.173 --rc geninfo_unexecuted_blocks=1 00:05:01.173 00:05:01.173 ' 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.173 --rc genhtml_branch_coverage=1 00:05:01.173 --rc genhtml_function_coverage=1 00:05:01.173 --rc genhtml_legend=1 00:05:01.173 --rc geninfo_all_blocks=1 00:05:01.173 --rc geninfo_unexecuted_blocks=1 00:05:01.173 00:05:01.173 ' 00:05:01.173 09:59:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:01.173 OK 00:05:01.173 09:59:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.173 00:05:01.173 real 0m0.230s 00:05:01.173 user 0m0.136s 00:05:01.173 sys 0m0.107s 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.173 ************************************ 00:05:01.173 END TEST rpc_client 00:05:01.173 ************************************ 00:05:01.173 09:59:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.173 09:59:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.173 09:59:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.173 09:59:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.173 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.173 ************************************ 00:05:01.173 START TEST json_config 
00:05:01.173 ************************************ 00:05:01.173 09:59:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.433 09:59:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.433 09:59:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.433 09:59:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.433 09:59:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.433 09:59:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.433 09:59:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:01.433 09:59:15 json_config -- scripts/common.sh@345 -- # : 1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.433 09:59:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.433 09:59:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@353 -- # local d=1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.433 09:59:15 json_config -- scripts/common.sh@355 -- # echo 1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.433 09:59:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@353 -- # local d=2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.433 09:59:15 json_config -- scripts/common.sh@355 -- # echo 2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.433 09:59:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.433 09:59:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.433 09:59:15 json_config -- scripts/common.sh@368 -- # return 0 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.433 --rc genhtml_branch_coverage=1 00:05:01.433 --rc genhtml_function_coverage=1 00:05:01.433 --rc genhtml_legend=1 00:05:01.433 --rc geninfo_all_blocks=1 00:05:01.433 --rc geninfo_unexecuted_blocks=1 00:05:01.433 00:05:01.433 ' 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.433 --rc genhtml_branch_coverage=1 00:05:01.433 --rc genhtml_function_coverage=1 00:05:01.433 --rc genhtml_legend=1 00:05:01.433 --rc geninfo_all_blocks=1 00:05:01.433 --rc geninfo_unexecuted_blocks=1 00:05:01.433 00:05:01.433 ' 00:05:01.433 09:59:15 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.433 --rc genhtml_branch_coverage=1 00:05:01.433 --rc genhtml_function_coverage=1 00:05:01.433 --rc genhtml_legend=1 00:05:01.433 --rc geninfo_all_blocks=1 00:05:01.433 --rc geninfo_unexecuted_blocks=1 00:05:01.433 00:05:01.433 ' 00:05:01.433 09:59:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.433 --rc genhtml_branch_coverage=1 00:05:01.433 --rc genhtml_function_coverage=1 00:05:01.433 --rc genhtml_legend=1 00:05:01.433 --rc geninfo_all_blocks=1 00:05:01.433 --rc geninfo_unexecuted_blocks=1 00:05:01.433 00:05:01.433 ' 00:05:01.433 09:59:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c78e9af8-b39e-4b71-8f40-2b37c338158f 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c78e9af8-b39e-4b71-8f40-2b37c338158f 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.434 09:59:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.434 09:59:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.434 09:59:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.434 09:59:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.434 09:59:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.434 09:59:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.434 09:59:15 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.434 09:59:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.434 09:59:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@51 -- # : 0 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.434 09:59:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:01.434 WARNING: No tests are enabled so not running JSON configuration tests 00:05:01.434 09:59:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:01.434 00:05:01.434 real 0m0.163s 00:05:01.434 user 0m0.101s 00:05:01.434 sys 0m0.066s 00:05:01.434 09:59:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.434 ************************************ 00:05:01.434 END TEST json_config 00:05:01.434 ************************************ 00:05:01.434 09:59:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.434 09:59:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.434 09:59:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.434 09:59:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.434 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.434 ************************************ 00:05:01.434 START TEST json_config_extra_key 00:05:01.434 ************************************ 00:05:01.434 09:59:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.694 09:59:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.694 09:59:15 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:01.694 09:59:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.694 09:59:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.694 09:59:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.694 09:59:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.694 09:59:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.695 --rc genhtml_branch_coverage=1 00:05:01.695 --rc genhtml_function_coverage=1 00:05:01.695 --rc genhtml_legend=1 00:05:01.695 --rc geninfo_all_blocks=1 00:05:01.695 --rc geninfo_unexecuted_blocks=1 00:05:01.695 00:05:01.695 ' 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.695 --rc genhtml_branch_coverage=1 00:05:01.695 --rc genhtml_function_coverage=1 00:05:01.695 --rc 
genhtml_legend=1 00:05:01.695 --rc geninfo_all_blocks=1 00:05:01.695 --rc geninfo_unexecuted_blocks=1 00:05:01.695 00:05:01.695 ' 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.695 --rc genhtml_branch_coverage=1 00:05:01.695 --rc genhtml_function_coverage=1 00:05:01.695 --rc genhtml_legend=1 00:05:01.695 --rc geninfo_all_blocks=1 00:05:01.695 --rc geninfo_unexecuted_blocks=1 00:05:01.695 00:05:01.695 ' 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.695 --rc genhtml_branch_coverage=1 00:05:01.695 --rc genhtml_function_coverage=1 00:05:01.695 --rc genhtml_legend=1 00:05:01.695 --rc geninfo_all_blocks=1 00:05:01.695 --rc geninfo_unexecuted_blocks=1 00:05:01.695 00:05:01.695 ' 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c78e9af8-b39e-4b71-8f40-2b37c338158f 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c78e9af8-b39e-4b71-8f40-2b37c338158f 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.695 09:59:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.695 09:59:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.695 09:59:15 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.695 09:59:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.695 09:59:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:01.695 09:59:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.695 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.695 09:59:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.695 INFO: launching applications... 00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:01.695 09:59:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57449 00:05:01.695 Waiting for target to run... 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.695 09:59:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57449 /var/tmp/spdk_tgt.sock 00:05:01.695 09:59:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57449 ']' 00:05:01.696 09:59:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.696 09:59:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.696 09:59:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:01.696 09:59:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.696 09:59:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.696 09:59:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.955 [2024-11-19 09:59:15.931508] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:01.955 [2024-11-19 09:59:15.931718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57449 ] 00:05:02.524 [2024-11-19 09:59:16.516072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.524 [2024-11-19 09:59:16.663923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.092 09:59:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.092 09:59:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:03.092 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:03.092 INFO: shutting down applications... 00:05:03.092 09:59:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:03.092 09:59:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57449 ]] 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57449 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:03.092 09:59:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.660 09:59:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.660 09:59:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.660 09:59:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:03.660 09:59:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.228 09:59:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.228 09:59:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.228 09:59:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:04.228 09:59:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.797 09:59:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.797 09:59:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.797 09:59:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:04.797 09:59:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.365 09:59:19 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:05.365 09:59:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.365 09:59:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:05.365 09:59:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.624 09:59:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.624 09:59:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.624 09:59:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:05.624 09:59:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57449 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:06.191 SPDK target shutdown done 00:05:06.191 Success 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.191 09:59:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.191 09:59:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.191 ************************************ 00:05:06.191 END TEST json_config_extra_key 00:05:06.191 ************************************ 00:05:06.191 00:05:06.191 real 0m4.726s 00:05:06.191 user 0m4.040s 00:05:06.191 sys 0m0.775s 00:05:06.191 09:59:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.191 09:59:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.191 09:59:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.191 09:59:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.191 09:59:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.191 09:59:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.191 ************************************ 00:05:06.191 START TEST alias_rpc 00:05:06.191 ************************************ 00:05:06.191 09:59:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.449 * Looking for test storage... 00:05:06.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:06.449 09:59:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.449 09:59:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.449 09:59:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.449 09:59:20 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.449 09:59:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.449 09:59:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.449 09:59:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.450 09:59:20 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.450 09:59:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.450 --rc genhtml_branch_coverage=1 00:05:06.450 --rc genhtml_function_coverage=1 00:05:06.450 --rc genhtml_legend=1 00:05:06.450 --rc geninfo_all_blocks=1 00:05:06.450 --rc geninfo_unexecuted_blocks=1 00:05:06.450 00:05:06.450 ' 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.450 --rc genhtml_branch_coverage=1 00:05:06.450 --rc genhtml_function_coverage=1 00:05:06.450 --rc 
genhtml_legend=1 00:05:06.450 --rc geninfo_all_blocks=1 00:05:06.450 --rc geninfo_unexecuted_blocks=1 00:05:06.450 00:05:06.450 ' 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.450 --rc genhtml_branch_coverage=1 00:05:06.450 --rc genhtml_function_coverage=1 00:05:06.450 --rc genhtml_legend=1 00:05:06.450 --rc geninfo_all_blocks=1 00:05:06.450 --rc geninfo_unexecuted_blocks=1 00:05:06.450 00:05:06.450 ' 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.450 --rc genhtml_branch_coverage=1 00:05:06.450 --rc genhtml_function_coverage=1 00:05:06.450 --rc genhtml_legend=1 00:05:06.450 --rc geninfo_all_blocks=1 00:05:06.450 --rc geninfo_unexecuted_blocks=1 00:05:06.450 00:05:06.450 ' 00:05:06.450 09:59:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.450 09:59:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57566 00:05:06.450 09:59:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57566 00:05:06.450 09:59:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57566 ']' 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.450 09:59:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.709 [2024-11-19 09:59:20.718975] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:06.709 [2024-11-19 09:59:20.719182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57566 ] 00:05:06.709 [2024-11-19 09:59:20.910083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.968 [2024-11-19 09:59:21.061569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.940 09:59:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.940 09:59:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.940 09:59:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:08.229 09:59:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57566 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57566 ']' 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57566 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57566 00:05:08.229 09:59:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.229 killing process with pid 57566 00:05:08.230 09:59:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.230 09:59:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57566' 00:05:08.230 09:59:22 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57566 00:05:08.230 09:59:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 57566 00:05:10.780 00:05:10.780 real 0m4.278s 00:05:10.780 user 0m4.352s 00:05:10.780 sys 0m0.733s 00:05:10.780 09:59:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.780 09:59:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.780 ************************************ 00:05:10.780 END TEST alias_rpc 00:05:10.780 ************************************ 00:05:10.780 09:59:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.780 09:59:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.780 09:59:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.780 09:59:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.780 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.780 ************************************ 00:05:10.780 START TEST spdkcli_tcp 00:05:10.780 ************************************ 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.780 * Looking for test storage... 
00:05:10.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.780 09:59:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.780 --rc genhtml_branch_coverage=1 00:05:10.780 --rc genhtml_function_coverage=1 00:05:10.780 --rc genhtml_legend=1 00:05:10.780 --rc geninfo_all_blocks=1 00:05:10.780 --rc geninfo_unexecuted_blocks=1 00:05:10.780 00:05:10.780 ' 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.780 --rc genhtml_branch_coverage=1 00:05:10.780 --rc genhtml_function_coverage=1 00:05:10.780 --rc genhtml_legend=1 00:05:10.780 --rc geninfo_all_blocks=1 00:05:10.780 --rc geninfo_unexecuted_blocks=1 00:05:10.780 00:05:10.780 ' 00:05:10.780 09:59:24 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.780 --rc genhtml_branch_coverage=1 00:05:10.780 --rc genhtml_function_coverage=1 00:05:10.780 --rc genhtml_legend=1 00:05:10.780 --rc geninfo_all_blocks=1 00:05:10.780 --rc geninfo_unexecuted_blocks=1 00:05:10.780 00:05:10.780 ' 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.780 --rc genhtml_branch_coverage=1 00:05:10.780 --rc genhtml_function_coverage=1 00:05:10.780 --rc genhtml_legend=1 00:05:10.780 --rc geninfo_all_blocks=1 00:05:10.780 --rc geninfo_unexecuted_blocks=1 00:05:10.780 00:05:10.780 ' 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57673 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.780 09:59:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57673 00:05:10.780 09:59:24 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57673 ']' 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.780 09:59:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.038 [2024-11-19 09:59:25.041602] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:11.038 [2024-11-19 09:59:25.041824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57673 ] 00:05:11.038 [2024-11-19 09:59:25.216688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.296 [2024-11-19 09:59:25.358734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.296 [2024-11-19 09:59:25.358752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.230 09:59:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.230 09:59:26 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:12.230 09:59:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57690 00:05:12.230 09:59:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:12.230 09:59:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:12.488 [ 00:05:12.488 "bdev_malloc_delete", 
00:05:12.488 "bdev_malloc_create", 00:05:12.488 "bdev_null_resize", 00:05:12.488 "bdev_null_delete", 00:05:12.488 "bdev_null_create", 00:05:12.488 "bdev_nvme_cuse_unregister", 00:05:12.488 "bdev_nvme_cuse_register", 00:05:12.488 "bdev_opal_new_user", 00:05:12.488 "bdev_opal_set_lock_state", 00:05:12.488 "bdev_opal_delete", 00:05:12.488 "bdev_opal_get_info", 00:05:12.488 "bdev_opal_create", 00:05:12.488 "bdev_nvme_opal_revert", 00:05:12.488 "bdev_nvme_opal_init", 00:05:12.489 "bdev_nvme_send_cmd", 00:05:12.489 "bdev_nvme_set_keys", 00:05:12.489 "bdev_nvme_get_path_iostat", 00:05:12.489 "bdev_nvme_get_mdns_discovery_info", 00:05:12.489 "bdev_nvme_stop_mdns_discovery", 00:05:12.489 "bdev_nvme_start_mdns_discovery", 00:05:12.489 "bdev_nvme_set_multipath_policy", 00:05:12.489 "bdev_nvme_set_preferred_path", 00:05:12.489 "bdev_nvme_get_io_paths", 00:05:12.489 "bdev_nvme_remove_error_injection", 00:05:12.489 "bdev_nvme_add_error_injection", 00:05:12.489 "bdev_nvme_get_discovery_info", 00:05:12.489 "bdev_nvme_stop_discovery", 00:05:12.489 "bdev_nvme_start_discovery", 00:05:12.489 "bdev_nvme_get_controller_health_info", 00:05:12.489 "bdev_nvme_disable_controller", 00:05:12.489 "bdev_nvme_enable_controller", 00:05:12.489 "bdev_nvme_reset_controller", 00:05:12.489 "bdev_nvme_get_transport_statistics", 00:05:12.489 "bdev_nvme_apply_firmware", 00:05:12.489 "bdev_nvme_detach_controller", 00:05:12.489 "bdev_nvme_get_controllers", 00:05:12.489 "bdev_nvme_attach_controller", 00:05:12.489 "bdev_nvme_set_hotplug", 00:05:12.489 "bdev_nvme_set_options", 00:05:12.489 "bdev_passthru_delete", 00:05:12.489 "bdev_passthru_create", 00:05:12.489 "bdev_lvol_set_parent_bdev", 00:05:12.489 "bdev_lvol_set_parent", 00:05:12.489 "bdev_lvol_check_shallow_copy", 00:05:12.489 "bdev_lvol_start_shallow_copy", 00:05:12.489 "bdev_lvol_grow_lvstore", 00:05:12.489 "bdev_lvol_get_lvols", 00:05:12.489 "bdev_lvol_get_lvstores", 00:05:12.489 "bdev_lvol_delete", 00:05:12.489 "bdev_lvol_set_read_only", 
00:05:12.489 "bdev_lvol_resize", 00:05:12.489 "bdev_lvol_decouple_parent", 00:05:12.489 "bdev_lvol_inflate", 00:05:12.489 "bdev_lvol_rename", 00:05:12.489 "bdev_lvol_clone_bdev", 00:05:12.489 "bdev_lvol_clone", 00:05:12.489 "bdev_lvol_snapshot", 00:05:12.489 "bdev_lvol_create", 00:05:12.489 "bdev_lvol_delete_lvstore", 00:05:12.489 "bdev_lvol_rename_lvstore", 00:05:12.489 "bdev_lvol_create_lvstore", 00:05:12.489 "bdev_raid_set_options", 00:05:12.489 "bdev_raid_remove_base_bdev", 00:05:12.489 "bdev_raid_add_base_bdev", 00:05:12.489 "bdev_raid_delete", 00:05:12.489 "bdev_raid_create", 00:05:12.489 "bdev_raid_get_bdevs", 00:05:12.489 "bdev_error_inject_error", 00:05:12.489 "bdev_error_delete", 00:05:12.489 "bdev_error_create", 00:05:12.489 "bdev_split_delete", 00:05:12.489 "bdev_split_create", 00:05:12.489 "bdev_delay_delete", 00:05:12.489 "bdev_delay_create", 00:05:12.489 "bdev_delay_update_latency", 00:05:12.489 "bdev_zone_block_delete", 00:05:12.489 "bdev_zone_block_create", 00:05:12.489 "blobfs_create", 00:05:12.489 "blobfs_detect", 00:05:12.489 "blobfs_set_cache_size", 00:05:12.489 "bdev_aio_delete", 00:05:12.489 "bdev_aio_rescan", 00:05:12.489 "bdev_aio_create", 00:05:12.489 "bdev_ftl_set_property", 00:05:12.489 "bdev_ftl_get_properties", 00:05:12.489 "bdev_ftl_get_stats", 00:05:12.489 "bdev_ftl_unmap", 00:05:12.489 "bdev_ftl_unload", 00:05:12.489 "bdev_ftl_delete", 00:05:12.489 "bdev_ftl_load", 00:05:12.489 "bdev_ftl_create", 00:05:12.489 "bdev_virtio_attach_controller", 00:05:12.489 "bdev_virtio_scsi_get_devices", 00:05:12.489 "bdev_virtio_detach_controller", 00:05:12.489 "bdev_virtio_blk_set_hotplug", 00:05:12.489 "bdev_iscsi_delete", 00:05:12.489 "bdev_iscsi_create", 00:05:12.489 "bdev_iscsi_set_options", 00:05:12.489 "accel_error_inject_error", 00:05:12.489 "ioat_scan_accel_module", 00:05:12.489 "dsa_scan_accel_module", 00:05:12.489 "iaa_scan_accel_module", 00:05:12.489 "keyring_file_remove_key", 00:05:12.489 "keyring_file_add_key", 00:05:12.489 
"keyring_linux_set_options", 00:05:12.489 "fsdev_aio_delete", 00:05:12.489 "fsdev_aio_create", 00:05:12.489 "iscsi_get_histogram", 00:05:12.489 "iscsi_enable_histogram", 00:05:12.489 "iscsi_set_options", 00:05:12.489 "iscsi_get_auth_groups", 00:05:12.489 "iscsi_auth_group_remove_secret", 00:05:12.489 "iscsi_auth_group_add_secret", 00:05:12.489 "iscsi_delete_auth_group", 00:05:12.489 "iscsi_create_auth_group", 00:05:12.489 "iscsi_set_discovery_auth", 00:05:12.489 "iscsi_get_options", 00:05:12.489 "iscsi_target_node_request_logout", 00:05:12.489 "iscsi_target_node_set_redirect", 00:05:12.489 "iscsi_target_node_set_auth", 00:05:12.489 "iscsi_target_node_add_lun", 00:05:12.489 "iscsi_get_stats", 00:05:12.489 "iscsi_get_connections", 00:05:12.489 "iscsi_portal_group_set_auth", 00:05:12.489 "iscsi_start_portal_group", 00:05:12.489 "iscsi_delete_portal_group", 00:05:12.489 "iscsi_create_portal_group", 00:05:12.489 "iscsi_get_portal_groups", 00:05:12.489 "iscsi_delete_target_node", 00:05:12.489 "iscsi_target_node_remove_pg_ig_maps", 00:05:12.489 "iscsi_target_node_add_pg_ig_maps", 00:05:12.489 "iscsi_create_target_node", 00:05:12.489 "iscsi_get_target_nodes", 00:05:12.489 "iscsi_delete_initiator_group", 00:05:12.489 "iscsi_initiator_group_remove_initiators", 00:05:12.489 "iscsi_initiator_group_add_initiators", 00:05:12.489 "iscsi_create_initiator_group", 00:05:12.489 "iscsi_get_initiator_groups", 00:05:12.489 "nvmf_set_crdt", 00:05:12.489 "nvmf_set_config", 00:05:12.489 "nvmf_set_max_subsystems", 00:05:12.489 "nvmf_stop_mdns_prr", 00:05:12.489 "nvmf_publish_mdns_prr", 00:05:12.489 "nvmf_subsystem_get_listeners", 00:05:12.489 "nvmf_subsystem_get_qpairs", 00:05:12.489 "nvmf_subsystem_get_controllers", 00:05:12.489 "nvmf_get_stats", 00:05:12.489 "nvmf_get_transports", 00:05:12.489 "nvmf_create_transport", 00:05:12.489 "nvmf_get_targets", 00:05:12.489 "nvmf_delete_target", 00:05:12.489 "nvmf_create_target", 00:05:12.489 "nvmf_subsystem_allow_any_host", 00:05:12.489 
"nvmf_subsystem_set_keys", 00:05:12.489 "nvmf_subsystem_remove_host", 00:05:12.489 "nvmf_subsystem_add_host", 00:05:12.489 "nvmf_ns_remove_host", 00:05:12.489 "nvmf_ns_add_host", 00:05:12.489 "nvmf_subsystem_remove_ns", 00:05:12.489 "nvmf_subsystem_set_ns_ana_group", 00:05:12.489 "nvmf_subsystem_add_ns", 00:05:12.489 "nvmf_subsystem_listener_set_ana_state", 00:05:12.489 "nvmf_discovery_get_referrals", 00:05:12.489 "nvmf_discovery_remove_referral", 00:05:12.489 "nvmf_discovery_add_referral", 00:05:12.489 "nvmf_subsystem_remove_listener", 00:05:12.489 "nvmf_subsystem_add_listener", 00:05:12.489 "nvmf_delete_subsystem", 00:05:12.489 "nvmf_create_subsystem", 00:05:12.489 "nvmf_get_subsystems", 00:05:12.489 "env_dpdk_get_mem_stats", 00:05:12.489 "nbd_get_disks", 00:05:12.489 "nbd_stop_disk", 00:05:12.489 "nbd_start_disk", 00:05:12.489 "ublk_recover_disk", 00:05:12.489 "ublk_get_disks", 00:05:12.489 "ublk_stop_disk", 00:05:12.489 "ublk_start_disk", 00:05:12.489 "ublk_destroy_target", 00:05:12.489 "ublk_create_target", 00:05:12.489 "virtio_blk_create_transport", 00:05:12.489 "virtio_blk_get_transports", 00:05:12.489 "vhost_controller_set_coalescing", 00:05:12.489 "vhost_get_controllers", 00:05:12.489 "vhost_delete_controller", 00:05:12.489 "vhost_create_blk_controller", 00:05:12.489 "vhost_scsi_controller_remove_target", 00:05:12.489 "vhost_scsi_controller_add_target", 00:05:12.489 "vhost_start_scsi_controller", 00:05:12.489 "vhost_create_scsi_controller", 00:05:12.489 "thread_set_cpumask", 00:05:12.489 "scheduler_set_options", 00:05:12.489 "framework_get_governor", 00:05:12.489 "framework_get_scheduler", 00:05:12.489 "framework_set_scheduler", 00:05:12.489 "framework_get_reactors", 00:05:12.489 "thread_get_io_channels", 00:05:12.489 "thread_get_pollers", 00:05:12.489 "thread_get_stats", 00:05:12.489 "framework_monitor_context_switch", 00:05:12.489 "spdk_kill_instance", 00:05:12.489 "log_enable_timestamps", 00:05:12.489 "log_get_flags", 00:05:12.489 "log_clear_flag", 
00:05:12.489 "log_set_flag", 00:05:12.489 "log_get_level", 00:05:12.489 "log_set_level", 00:05:12.489 "log_get_print_level", 00:05:12.489 "log_set_print_level", 00:05:12.489 "framework_enable_cpumask_locks", 00:05:12.489 "framework_disable_cpumask_locks", 00:05:12.489 "framework_wait_init", 00:05:12.489 "framework_start_init", 00:05:12.489 "scsi_get_devices", 00:05:12.489 "bdev_get_histogram", 00:05:12.489 "bdev_enable_histogram", 00:05:12.489 "bdev_set_qos_limit", 00:05:12.489 "bdev_set_qd_sampling_period", 00:05:12.489 "bdev_get_bdevs", 00:05:12.489 "bdev_reset_iostat", 00:05:12.489 "bdev_get_iostat", 00:05:12.489 "bdev_examine", 00:05:12.489 "bdev_wait_for_examine", 00:05:12.489 "bdev_set_options", 00:05:12.489 "accel_get_stats", 00:05:12.489 "accel_set_options", 00:05:12.489 "accel_set_driver", 00:05:12.489 "accel_crypto_key_destroy", 00:05:12.489 "accel_crypto_keys_get", 00:05:12.489 "accel_crypto_key_create", 00:05:12.489 "accel_assign_opc", 00:05:12.489 "accel_get_module_info", 00:05:12.489 "accel_get_opc_assignments", 00:05:12.489 "vmd_rescan", 00:05:12.489 "vmd_remove_device", 00:05:12.489 "vmd_enable", 00:05:12.489 "sock_get_default_impl", 00:05:12.489 "sock_set_default_impl", 00:05:12.489 "sock_impl_set_options", 00:05:12.489 "sock_impl_get_options", 00:05:12.489 "iobuf_get_stats", 00:05:12.489 "iobuf_set_options", 00:05:12.489 "keyring_get_keys", 00:05:12.489 "framework_get_pci_devices", 00:05:12.489 "framework_get_config", 00:05:12.489 "framework_get_subsystems", 00:05:12.489 "fsdev_set_opts", 00:05:12.490 "fsdev_get_opts", 00:05:12.490 "trace_get_info", 00:05:12.490 "trace_get_tpoint_group_mask", 00:05:12.490 "trace_disable_tpoint_group", 00:05:12.490 "trace_enable_tpoint_group", 00:05:12.490 "trace_clear_tpoint_mask", 00:05:12.490 "trace_set_tpoint_mask", 00:05:12.490 "notify_get_notifications", 00:05:12.490 "notify_get_types", 00:05:12.490 "spdk_get_version", 00:05:12.490 "rpc_get_methods" 00:05:12.490 ] 00:05:12.490 09:59:26 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.490 09:59:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:12.490 09:59:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57673 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57673 ']' 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57673 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57673 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.490 killing process with pid 57673 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57673' 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57673 00:05:12.490 09:59:26 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57673 00:05:15.023 ************************************ 00:05:15.023 END TEST spdkcli_tcp 00:05:15.023 ************************************ 00:05:15.023 00:05:15.023 real 0m4.152s 00:05:15.023 user 0m7.459s 00:05:15.023 sys 0m0.766s 00:05:15.023 09:59:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.023 09:59:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.023 09:59:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.023 09:59:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.023 09:59:28 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.023 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.023 ************************************ 00:05:15.023 START TEST dpdk_mem_utility 00:05:15.023 ************************************ 00:05:15.023 09:59:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.023 * Looking for test storage... 00:05:15.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:15.023 
09:59:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.023 09:59:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.023 --rc genhtml_branch_coverage=1 00:05:15.023 --rc genhtml_function_coverage=1 00:05:15.023 --rc genhtml_legend=1 00:05:15.023 --rc geninfo_all_blocks=1 00:05:15.023 --rc geninfo_unexecuted_blocks=1 00:05:15.023 00:05:15.023 ' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.023 --rc 
genhtml_branch_coverage=1 00:05:15.023 --rc genhtml_function_coverage=1 00:05:15.023 --rc genhtml_legend=1 00:05:15.023 --rc geninfo_all_blocks=1 00:05:15.023 --rc geninfo_unexecuted_blocks=1 00:05:15.023 00:05:15.023 ' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.023 --rc genhtml_branch_coverage=1 00:05:15.023 --rc genhtml_function_coverage=1 00:05:15.023 --rc genhtml_legend=1 00:05:15.023 --rc geninfo_all_blocks=1 00:05:15.023 --rc geninfo_unexecuted_blocks=1 00:05:15.023 00:05:15.023 ' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.023 --rc genhtml_branch_coverage=1 00:05:15.023 --rc genhtml_function_coverage=1 00:05:15.023 --rc genhtml_legend=1 00:05:15.023 --rc geninfo_all_blocks=1 00:05:15.023 --rc geninfo_unexecuted_blocks=1 00:05:15.023 00:05:15.023 ' 00:05:15.023 09:59:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.023 09:59:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57795 00:05:15.023 09:59:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57795 00:05:15.023 09:59:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57795 ']' 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:15.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.023 09:59:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.282 [2024-11-19 09:59:29.266555] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:15.282 [2024-11-19 09:59:29.266753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57795 ] 00:05:15.282 [2024-11-19 09:59:29.450505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.541 [2024-11-19 09:59:29.584214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.480 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.480 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:16.480 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.480 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.480 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.480 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.480 { 00:05:16.480 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.480 } 00:05:16.480 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.480 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.480 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:16.480 1 heaps totaling size 816.000000 MiB 00:05:16.480 size: 
816.000000 MiB heap id: 0 00:05:16.480 end heaps---------- 00:05:16.480 9 mempools totaling size 595.772034 MiB 00:05:16.480 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.480 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.480 size: 92.545471 MiB name: bdev_io_57795 00:05:16.480 size: 50.003479 MiB name: msgpool_57795 00:05:16.480 size: 36.509338 MiB name: fsdev_io_57795 00:05:16.480 size: 21.763794 MiB name: PDU_Pool 00:05:16.480 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.480 size: 4.133484 MiB name: evtpool_57795 00:05:16.480 size: 0.026123 MiB name: Session_Pool 00:05:16.480 end mempools------- 00:05:16.480 6 memzones totaling size 4.142822 MiB 00:05:16.480 size: 1.000366 MiB name: RG_ring_0_57795 00:05:16.480 size: 1.000366 MiB name: RG_ring_1_57795 00:05:16.480 size: 1.000366 MiB name: RG_ring_4_57795 00:05:16.480 size: 1.000366 MiB name: RG_ring_5_57795 00:05:16.480 size: 0.125366 MiB name: RG_ring_2_57795 00:05:16.480 size: 0.015991 MiB name: RG_ring_3_57795 00:05:16.480 end memzones------- 00:05:16.480 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.480 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:05:16.480 list of free elements. 
size: 16.790649 MiB 00:05:16.480 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:16.480 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:16.480 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:16.480 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:16.480 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:16.480 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:16.480 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:16.480 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:16.480 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:16.480 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:16.480 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:16.480 element at address: 0x20001ac00000 with size: 0.561218 MiB 00:05:16.480 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:16.480 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:16.480 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:16.480 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:16.480 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:16.480 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:16.480 list of standard malloc elements. 
size: 199.288452 MiB 00:05:16.480 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:16.480 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:16.480 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:16.481 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:16.481 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:16.481 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:16.481 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:16.481 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:16.481 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:16.481 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:16.481 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:16.481 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:16.481 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:16.481 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:16.481 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:16.481 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:16.481 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:16.481 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:16.482 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:16.482 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac90ec0 with size: 0.000244 
MiB 00:05:16.482 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92ac0 
with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:16.482 element at 
address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:16.482 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:16.482 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806b980 with size: 0.000244 MiB 
00:05:16.482 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d580 with 
size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:16.482 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:16.483 element at address: 
0x20002806f180 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:16.483 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:16.483 list of memzone associated elements. 
size: 599.920898 MiB 00:05:16.483 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:16.483 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.483 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:16.483 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.483 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:16.483 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57795_0 00:05:16.483 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:16.483 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57795_0 00:05:16.483 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:16.483 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57795_0 00:05:16.483 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:16.483 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.483 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:16.483 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.483 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:16.483 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57795_0 00:05:16.483 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:16.483 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57795 00:05:16.483 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:16.483 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57795 00:05:16.483 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:16.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.483 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:16.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.483 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:16.483 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.483 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:16.483 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.483 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:16.483 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57795 00:05:16.483 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:16.483 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57795 00:05:16.483 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:16.483 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57795 00:05:16.483 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:16.483 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57795 00:05:16.483 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:16.483 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57795 00:05:16.483 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:16.483 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57795 00:05:16.483 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:16.483 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.483 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:16.483 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.483 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:16.483 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.483 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:16.483 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57795 00:05:16.483 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:16.483 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57795 00:05:16.483 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:16.483 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.483 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:16.483 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.483 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:16.483 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57795 00:05:16.483 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:16.483 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.483 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:16.483 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57795 00:05:16.483 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:16.483 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57795 00:05:16.483 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:16.483 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57795 00:05:16.483 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:16.483 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.483 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.483 09:59:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57795 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57795 ']' 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57795 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57795 00:05:16.483 killing process with pid 57795 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57795' 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57795 00:05:16.483 09:59:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57795 00:05:19.026 00:05:19.026 real 0m3.916s 00:05:19.026 user 0m3.832s 00:05:19.026 sys 0m0.721s 00:05:19.026 09:59:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.026 09:59:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.026 ************************************ 00:05:19.026 END TEST dpdk_mem_utility 00:05:19.026 ************************************ 00:05:19.026 09:59:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:19.026 09:59:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.026 09:59:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.026 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.026 ************************************ 00:05:19.026 START TEST event 00:05:19.026 ************************************ 00:05:19.026 09:59:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:19.026 * Looking for test storage... 
00:05:19.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:19.026 09:59:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.026 09:59:32 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.026 09:59:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.026 09:59:33 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.026 09:59:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.026 09:59:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.026 09:59:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.026 09:59:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.026 09:59:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.026 09:59:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.026 09:59:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.026 09:59:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.026 09:59:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.026 09:59:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.026 09:59:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.026 09:59:33 event -- scripts/common.sh@344 -- # case "$op" in 00:05:19.026 09:59:33 event -- scripts/common.sh@345 -- # : 1 00:05:19.026 09:59:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.026 09:59:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.026 09:59:33 event -- scripts/common.sh@365 -- # decimal 1 00:05:19.026 09:59:33 event -- scripts/common.sh@353 -- # local d=1 00:05:19.026 09:59:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.026 09:59:33 event -- scripts/common.sh@355 -- # echo 1 00:05:19.026 09:59:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.026 09:59:33 event -- scripts/common.sh@366 -- # decimal 2 00:05:19.026 09:59:33 event -- scripts/common.sh@353 -- # local d=2 00:05:19.026 09:59:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.026 09:59:33 event -- scripts/common.sh@355 -- # echo 2 00:05:19.026 09:59:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.026 09:59:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.026 09:59:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.026 09:59:33 event -- scripts/common.sh@368 -- # return 0 00:05:19.026 09:59:33 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.026 09:59:33 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.026 --rc genhtml_branch_coverage=1 00:05:19.026 --rc genhtml_function_coverage=1 00:05:19.026 --rc genhtml_legend=1 00:05:19.026 --rc geninfo_all_blocks=1 00:05:19.026 --rc geninfo_unexecuted_blocks=1 00:05:19.026 00:05:19.026 ' 00:05:19.026 09:59:33 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.027 --rc genhtml_branch_coverage=1 00:05:19.027 --rc genhtml_function_coverage=1 00:05:19.027 --rc genhtml_legend=1 00:05:19.027 --rc geninfo_all_blocks=1 00:05:19.027 --rc geninfo_unexecuted_blocks=1 00:05:19.027 00:05:19.027 ' 00:05:19.027 09:59:33 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.027 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:19.027 --rc genhtml_branch_coverage=1 00:05:19.027 --rc genhtml_function_coverage=1 00:05:19.027 --rc genhtml_legend=1 00:05:19.027 --rc geninfo_all_blocks=1 00:05:19.027 --rc geninfo_unexecuted_blocks=1 00:05:19.027 00:05:19.027 ' 00:05:19.027 09:59:33 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.027 --rc genhtml_branch_coverage=1 00:05:19.027 --rc genhtml_function_coverage=1 00:05:19.027 --rc genhtml_legend=1 00:05:19.027 --rc geninfo_all_blocks=1 00:05:19.027 --rc geninfo_unexecuted_blocks=1 00:05:19.027 00:05:19.027 ' 00:05:19.027 09:59:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:19.027 09:59:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.027 09:59:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.027 09:59:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:19.027 09:59:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.027 09:59:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.027 ************************************ 00:05:19.027 START TEST event_perf 00:05:19.027 ************************************ 00:05:19.027 09:59:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.027 Running I/O for 1 seconds...[2024-11-19 09:59:33.140737] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:05:19.027 [2024-11-19 09:59:33.141026] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:05:19.286 [2024-11-19 09:59:33.313371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.286 [2024-11-19 09:59:33.463898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.286 [2024-11-19 09:59:33.464052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.286 Running I/O for 1 seconds...[2024-11-19 09:59:33.464171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.286 [2024-11-19 09:59:33.464190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.664 00:05:20.664 lcore 0: 204362 00:05:20.664 lcore 1: 204358 00:05:20.664 lcore 2: 204360 00:05:20.664 lcore 3: 204360 00:05:20.664 done. 
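The `-m 0xF` reactor mask in the event_perf invocation above selects cores 0-3, which matches the four per-lcore event counters printed at the end of the run. A minimal sketch of how such a hex coremask decodes into a core list (plain bash only; SPDK itself does this parsing in C via DPDK's EAL, not in shell):

```shell
#!/usr/bin/env bash
# Decode an SPDK-style hex coremask (-m) into the list of selected cores.
# Sketch only: mirrors the mask arithmetic, not SPDK's actual parser.
mask=$((0xF))            # the -m 0xF passed to event_perf in the log above
cores=()
for ((i = 0; i < 32; i++)); do
  # test bit i of the mask; a set bit means core i hosts a reactor
  if (( (mask >> i) & 1 )); then
    cores+=("$i")
  fi
done
echo "cores: ${cores[*]}"   # -> cores: 0 1 2 3
```

The four selected cores line up with the `Reactor started on core 0..3` notices and the `lcore 0..3` counters in the trace.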
00:05:20.664 00:05:20.664 real 0m1.612s 00:05:20.664 user 0m4.355s 00:05:20.664 sys 0m0.132s 00:05:20.664 ************************************ 00:05:20.664 END TEST event_perf 00:05:20.664 ************************************ 00:05:20.664 09:59:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.664 09:59:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.664 09:59:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.664 09:59:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.664 09:59:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.664 09:59:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.664 ************************************ 00:05:20.664 START TEST event_reactor 00:05:20.664 ************************************ 00:05:20.664 09:59:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.664 [2024-11-19 09:59:34.805833] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:05:20.664 [2024-11-19 09:59:34.806193] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57937 ] 00:05:20.922 [2024-11-19 09:59:34.983307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.922 [2024-11-19 09:59:35.123836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.298 test_start 00:05:22.298 oneshot 00:05:22.298 tick 100 00:05:22.298 tick 100 00:05:22.298 tick 250 00:05:22.298 tick 100 00:05:22.298 tick 100 00:05:22.298 tick 100 00:05:22.298 tick 250 00:05:22.298 tick 500 00:05:22.298 tick 100 00:05:22.298 tick 100 00:05:22.298 tick 250 00:05:22.298 tick 100 00:05:22.298 tick 100 00:05:22.298 test_end 00:05:22.298 00:05:22.298 real 0m1.587s 00:05:22.298 user 0m1.382s 00:05:22.298 sys 0m0.097s 00:05:22.298 09:59:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.298 09:59:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.299 ************************************ 00:05:22.299 END TEST event_reactor 00:05:22.299 ************************************ 00:05:22.299 09:59:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.299 09:59:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.299 09:59:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.299 09:59:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.299 ************************************ 00:05:22.299 START TEST event_reactor_perf 00:05:22.299 ************************************ 00:05:22.299 09:59:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.299 [2024-11-19 
09:59:36.453845] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:22.299 [2024-11-19 09:59:36.454010] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57979 ] 00:05:22.557 [2024-11-19 09:59:36.633900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.557 [2024-11-19 09:59:36.770762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.934 test_start 00:05:23.934 test_end 00:05:23.934 Performance: 330215 events per second 00:05:23.934 00:05:23.934 real 0m1.587s 00:05:23.934 user 0m1.365s 00:05:23.934 sys 0m0.113s 00:05:23.934 09:59:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.934 ************************************ 00:05:23.934 END TEST event_reactor_perf 00:05:23.934 ************************************ 00:05:23.934 09:59:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.934 09:59:38 event -- event/event.sh@49 -- # uname -s 00:05:23.934 09:59:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.934 09:59:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.934 09:59:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.934 09:59:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.934 09:59:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.934 ************************************ 00:05:23.934 START TEST event_scheduler 00:05:23.934 ************************************ 00:05:23.934 09:59:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.934 * Looking for test storage... 
00:05:23.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:23.934 09:59:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.934 09:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.934 09:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.193 09:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.193 09:59:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc 
genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 09:59:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.194 09:59:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58055 00:05:24.194 09:59:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.194 09:59:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.194 09:59:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58055 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58055 ']' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.194 09:59:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.194 [2024-11-19 09:59:38.352160] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:24.194 [2024-11-19 09:59:38.352375] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58055 ] 00:05:24.453 [2024-11-19 09:59:38.544075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.712 [2024-11-19 09:59:38.716312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.712 [2024-11-19 09:59:38.716497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.712 [2024-11-19 09:59:38.716622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.712 [2024-11-19 09:59:38.716842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:25.282 09:59:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.282 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.282 POWER: Cannot set governor of lcore 0 to performance 00:05:25.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.282 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:25.282 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:25.282 POWER: Unable to set Power Management Environment for lcore 0 00:05:25.282 [2024-11-19 09:59:39.342920] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:25.282 [2024-11-19 09:59:39.342951] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:25.282 [2024-11-19 09:59:39.342967] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.282 [2024-11-19 09:59:39.342994] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.282 [2024-11-19 09:59:39.343007] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.282 [2024-11-19 09:59:39.343022] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.282 09:59:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.282 09:59:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 [2024-11-19 09:59:39.660775] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
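The scheduler_create_thread test registers one `active_pinned` thread per core (cpumask `0x1`, `0x2`, `0x4`, `0x8` at 100% activity) and one `idle_pinned` thread per core at 0%. A hedged bash sketch of the mask arithmetic behind those per-core cpumasks (the real test issues each create through `rpc_cmd --plugin scheduler_plugin`; this only shows how core *i* maps to mask `1 << i`):

```shell
#!/usr/bin/env bash
# Sketch: per-core cpumasks used by the scheduler thread-create test.
# cpumask for core i is 1 << i; -a is the thread's target activity percent.
for core in 0 1 2 3; do
  printf 'active_pinned core %d: -m 0x%X -a 100\n' "$core" $((1 << core))
done
# -> active_pinned core 0: -m 0x1 -a 100
#    active_pinned core 1: -m 0x2 -a 100
#    active_pinned core 2: -m 0x4 -a 100
#    active_pinned core 3: -m 0x8 -a 100
```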
00:05:25.541 09:59:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.541 09:59:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.541 09:59:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 ************************************ 00:05:25.541 START TEST scheduler_create_thread 00:05:25.541 ************************************ 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 2 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 3 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 4 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 5 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 6 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.541 7 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 8 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 9 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 10 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.541 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.800 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.800 09:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:25.800 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.800 09:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.196 09:59:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.196 09:59:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.196 09:59:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.196 09:59:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.196 09:59:41 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.175 09:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.175 00:05:28.175 real 0m2.621s 00:05:28.175 user 0m0.021s 00:05:28.175 sys 0m0.007s 00:05:28.175 09:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.175 09:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.175 ************************************ 00:05:28.175 END TEST scheduler_create_thread 00:05:28.175 ************************************ 00:05:28.175 09:59:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.175 09:59:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58055 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58055 ']' 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58055 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58055 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:28.175 killing process with pid 58055 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58055' 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58055 00:05:28.175 09:59:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58055 00:05:28.744 [2024-11-19 09:59:42.775588] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:29.682 00:05:29.682 real 0m5.787s 00:05:29.682 user 0m10.048s 00:05:29.682 sys 0m0.563s 00:05:29.682 09:59:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.682 09:59:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.682 ************************************ 00:05:29.682 END TEST event_scheduler 00:05:29.682 ************************************ 00:05:29.682 09:59:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.682 09:59:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.682 09:59:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.682 09:59:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.682 09:59:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.682 ************************************ 00:05:29.682 START TEST app_repeat 00:05:29.682 ************************************ 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58161 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.682 09:59:43 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.682 Process app_repeat pid: 58161 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58161' 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.682 spdk_app_start Round 0 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.682 09:59:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58161 /var/tmp/spdk-nbd.sock 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.682 09:59:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.941 [2024-11-19 09:59:43.973869] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:05:29.941 [2024-11-19 09:59:43.974085] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58161 ] 00:05:29.941 [2024-11-19 09:59:44.162020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.198 [2024-11-19 09:59:44.305029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.198 [2024-11-19 09:59:44.305039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.767 09:59:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.767 09:59:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.767 09:59:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.026 Malloc0 00:05:31.026 09:59:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.285 Malloc1 00:05:31.545 09:59:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.545 09:59:45 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.545 09:59:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.545 /dev/nbd0 00:05:31.804 09:59:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.804 09:59:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.804 1+0 records in 00:05:31.804 1+0 
records out 00:05:31.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374233 s, 10.9 MB/s 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.804 09:59:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.805 09:59:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.805 09:59:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.805 09:59:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.805 09:59:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.064 /dev/nbd1 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.064 1+0 records in 00:05:32.064 1+0 records out 00:05:32.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282629 s, 14.5 MB/s 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.064 09:59:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.064 09:59:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.324 { 00:05:32.324 "nbd_device": "/dev/nbd0", 00:05:32.324 "bdev_name": "Malloc0" 00:05:32.324 }, 00:05:32.324 { 00:05:32.324 "nbd_device": "/dev/nbd1", 00:05:32.324 "bdev_name": "Malloc1" 00:05:32.324 } 00:05:32.324 ]' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.324 { 00:05:32.324 "nbd_device": "/dev/nbd0", 00:05:32.324 "bdev_name": "Malloc0" 00:05:32.324 }, 00:05:32.324 { 00:05:32.324 "nbd_device": "/dev/nbd1", 00:05:32.324 "bdev_name": "Malloc1" 00:05:32.324 } 00:05:32.324 ]' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.324 /dev/nbd1' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.324 /dev/nbd1' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.324 256+0 records in 00:05:32.324 256+0 records out 00:05:32.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00791894 s, 132 MB/s 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.324 256+0 records in 00:05:32.324 256+0 records out 00:05:32.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026602 s, 39.4 MB/s 00:05:32.324 09:59:46 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.324 256+0 records in 00:05:32.324 256+0 records out 00:05:32.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319435 s, 32.8 MB/s 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.324 09:59:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.893 09:59:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.153 09:59:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.412 09:59:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.412 09:59:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.979 09:59:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.917 [2024-11-19 09:59:48.940860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.917 [2024-11-19 09:59:49.042714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.917 [2024-11-19 09:59:49.042726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.175 
[2024-11-19 09:59:49.232006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.175 [2024-11-19 09:59:49.232068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.078 09:59:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.078 spdk_app_start Round 1 00:05:37.078 09:59:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.078 09:59:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58161 /var/tmp/spdk-nbd.sock 00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.078 09:59:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.078 09:59:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.078 09:59:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.078 09:59:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.337 Malloc0 00:05:37.337 09:59:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.596 Malloc1 00:05:37.596 09:59:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.596 09:59:51 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.596 09:59:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.165 /dev/nbd0 00:05:38.165 09:59:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.165 09:59:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.165 1+0 records in 00:05:38.165 1+0 records out 00:05:38.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274726 s, 14.9 MB/s 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.165 
09:59:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.165 09:59:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.165 09:59:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.165 09:59:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.165 09:59:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.424 /dev/nbd1 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.425 1+0 records in 00:05:38.425 1+0 records out 00:05:38.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026365 s, 15.5 MB/s 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.425 09:59:52 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.425 09:59:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.425 09:59:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.684 { 00:05:38.684 "nbd_device": "/dev/nbd0", 00:05:38.684 "bdev_name": "Malloc0" 00:05:38.684 }, 00:05:38.684 { 00:05:38.684 "nbd_device": "/dev/nbd1", 00:05:38.684 "bdev_name": "Malloc1" 00:05:38.684 } 00:05:38.684 ]' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.684 { 00:05:38.684 "nbd_device": "/dev/nbd0", 00:05:38.684 "bdev_name": "Malloc0" 00:05:38.684 }, 00:05:38.684 { 00:05:38.684 "nbd_device": "/dev/nbd1", 00:05:38.684 "bdev_name": "Malloc1" 00:05:38.684 } 00:05:38.684 ]' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.684 /dev/nbd1' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.684 /dev/nbd1' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.684 
09:59:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.684 256+0 records in 00:05:38.684 256+0 records out 00:05:38.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700804 s, 150 MB/s 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.684 256+0 records in 00:05:38.684 256+0 records out 00:05:38.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235613 s, 44.5 MB/s 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.684 256+0 records in 00:05:38.684 256+0 records out 00:05:38.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283585 s, 37.0 MB/s 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.684 09:59:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.943 09:59:53 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.943 09:59:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.202 09:59:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.203 09:59:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.203 09:59:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.461 09:59:53 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.461 09:59:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.461 09:59:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.461 09:59:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.462 09:59:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.462 09:59:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.030 09:59:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.967 [2024-11-19 09:59:55.004270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.967 [2024-11-19 09:59:55.118019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.967 [2024-11-19 09:59:55.118023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.226 [2024-11-19 09:59:55.298384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.226 [2024-11-19 09:59:55.298454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:43.136 spdk_app_start Round 2 00:05:43.136 09:59:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.136 09:59:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.136 09:59:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58161 /var/tmp/spdk-nbd.sock 00:05:43.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.137 09:59:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.137 09:59:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.704 Malloc0 00:05:43.704 09:59:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.964 Malloc1 00:05:43.964 09:59:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.964 
09:59:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.964 09:59:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.223 /dev/nbd0 00:05:44.223 09:59:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.223 09:59:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.223 09:59:58 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.223 1+0 records in 00:05:44.223 1+0 records out 00:05:44.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280692 s, 14.6 MB/s 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.223 09:59:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.223 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.223 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.223 09:59:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.482 /dev/nbd1 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.482 09:59:58 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.482 1+0 records in 00:05:44.482 1+0 records out 00:05:44.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392272 s, 10.4 MB/s 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.482 09:59:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.482 09:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.741 09:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.741 { 00:05:44.741 "nbd_device": "/dev/nbd0", 00:05:44.741 "bdev_name": "Malloc0" 00:05:44.741 }, 00:05:44.741 { 00:05:44.741 "nbd_device": "/dev/nbd1", 00:05:44.741 "bdev_name": 
"Malloc1" 00:05:44.741 } 00:05:44.741 ]' 00:05:44.741 09:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.741 { 00:05:44.741 "nbd_device": "/dev/nbd0", 00:05:44.741 "bdev_name": "Malloc0" 00:05:44.741 }, 00:05:44.741 { 00:05:44.741 "nbd_device": "/dev/nbd1", 00:05:44.741 "bdev_name": "Malloc1" 00:05:44.741 } 00:05:44.741 ]' 00:05:44.741 09:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.001 /dev/nbd1' 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.001 /dev/nbd1' 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.001 256+0 records in 00:05:45.001 256+0 records out 00:05:45.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00859881 s, 122 MB/s 
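The nbd_dd_data_verify flow running through this stretch of the log (fill a temp file from /dev/urandom, dd it onto each nbd device, then cmp each device back against the file) can be sketched as a standalone script. Plain temp files stand in for /dev/nbd0 and /dev/nbd1, which need a live SPDK nbd target; the paths and the dropped oflag=direct are adjustments for the sketch, not SPDK's helper.

```shell
#!/usr/bin/env bash
# Sketch of nbd_dd_data_verify: write random data, copy it to each
# device, then byte-compare each device against the source file.
set -eu

tmp_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")   # real run: ('/dev/nbd0' '/dev/nbd1')

# write phase: 1 MiB of random data, fanned out to every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    # the real helper adds oflag=direct, which regular files may reject
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: compare the first 1M of each device with the temp file
verify_status=ok
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || verify_status=failed
done
echo "verify: $verify_status"
rm -f "$tmp_file" "${nbd_list[@]}"
```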
00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.001 09:59:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.001 256+0 records in 00:05:45.001 256+0 records out 00:05:45.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233884 s, 44.8 MB/s 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.001 256+0 records in 00:05:45.001 256+0 records out 00:05:45.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327018 s, 32.1 MB/s 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.001 09:59:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.260 09:59:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.520 09:59:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.780 09:59:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.780 09:59:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.348 10:00:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.286 [2024-11-19 10:00:01.333310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.286 [2024-11-19 10:00:01.429674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.286 [2024-11-19 10:00:01.429682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.544 [2024-11-19 10:00:01.614651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.545 [2024-11-19 10:00:01.614757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.448 10:00:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58161 /var/tmp/spdk-nbd.sock 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
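The nbd_get_count helper exercised above (once with two disks, once with an empty list after the stop) boils down to: fetch the disk list as JSON over RPC, pull out `.nbd_device` with jq, and `grep -c` for /dev/nbd. A sketch with the RPC reply inlined; the JSON literal mirrors the log's output, and jq is assumed available:

```shell
# Sketch of nbd_get_count: count nbd devices in the nbd_get_disks reply.
# The JSON literal stands in for:
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits nonzero on zero matches (the empty-list case), hence || true
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"
```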
00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.448 10:00:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58161 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58161 ']' 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58161 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.448 10:00:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58161 00:05:49.707 killing process with pid 58161 00:05:49.707 10:00:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.707 10:00:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.707 10:00:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58161' 00:05:49.707 10:00:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58161 00:05:49.707 10:00:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58161 00:05:50.646 spdk_app_start is called in Round 0. 00:05:50.646 Shutdown signal received, stop current app iteration 00:05:50.646 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:05:50.646 spdk_app_start is called in Round 1. 00:05:50.646 Shutdown signal received, stop current app iteration 00:05:50.646 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:05:50.646 spdk_app_start is called in Round 2. 
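The killprocess helper invoked just above follows a fixed shape: confirm the pid is alive with `kill -0`, read its command name, SIGTERM it, then reap it. A simplified sketch; the real helper in common/autotest_common.sh also special-cases sudo-wrapped targets and pids it did not spawn:

```shell
# Sketch of killprocess: validate the pid, log, SIGTERM, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"                                   # default signal: SIGTERM
    # wait works because the target below is a child of this shell;
    # the real helper polls kill -0 for non-child pids instead
    wait "$pid" 2>/dev/null || true
}

sleep 60 &              # stand-in for the spdk_tgt process
target_pid=$!
killprocess "$target_pid"
```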
00:05:50.646 Shutdown signal received, stop current app iteration 00:05:50.646 Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 reinitialization... 00:05:50.646 spdk_app_start is called in Round 3. 00:05:50.646 Shutdown signal received, stop current app iteration 00:05:50.646 10:00:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.646 10:00:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.646 00:05:50.646 real 0m20.655s 00:05:50.646 user 0m45.284s 00:05:50.646 sys 0m3.083s 00:05:50.646 ************************************ 00:05:50.646 END TEST app_repeat 00:05:50.646 ************************************ 00:05:50.646 10:00:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.646 10:00:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.646 10:00:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.646 10:00:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.646 10:00:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.646 10:00:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.646 10:00:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.646 ************************************ 00:05:50.646 START TEST cpu_locks 00:05:50.646 ************************************ 00:05:50.646 10:00:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.646 * Looking for test storage... 
00:05:50.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:50.646 10:00:04 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.646 10:00:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.646 10:00:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.646 10:00:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.646 10:00:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.647 10:00:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.647 --rc genhtml_branch_coverage=1 00:05:50.647 --rc genhtml_function_coverage=1 00:05:50.647 --rc genhtml_legend=1 00:05:50.647 --rc geninfo_all_blocks=1 00:05:50.647 --rc geninfo_unexecuted_blocks=1 00:05:50.647 00:05:50.647 ' 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.647 --rc genhtml_branch_coverage=1 00:05:50.647 --rc genhtml_function_coverage=1 00:05:50.647 --rc genhtml_legend=1 00:05:50.647 --rc geninfo_all_blocks=1 00:05:50.647 --rc geninfo_unexecuted_blocks=1 
00:05:50.647 00:05:50.647 ' 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.647 --rc genhtml_branch_coverage=1 00:05:50.647 --rc genhtml_function_coverage=1 00:05:50.647 --rc genhtml_legend=1 00:05:50.647 --rc geninfo_all_blocks=1 00:05:50.647 --rc geninfo_unexecuted_blocks=1 00:05:50.647 00:05:50.647 ' 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.647 --rc genhtml_branch_coverage=1 00:05:50.647 --rc genhtml_function_coverage=1 00:05:50.647 --rc genhtml_legend=1 00:05:50.647 --rc geninfo_all_blocks=1 00:05:50.647 --rc geninfo_unexecuted_blocks=1 00:05:50.647 00:05:50.647 ' 00:05:50.647 10:00:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.647 10:00:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.647 10:00:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.647 10:00:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.647 10:00:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.647 ************************************ 00:05:50.647 START TEST default_locks 00:05:50.647 ************************************ 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58625 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.647 
10:00:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58625 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58625 ']' 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.647 10:00:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.906 [2024-11-19 10:00:04.912571] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
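waitforlisten, used here for pid 58625, polls until the target process is both alive and reachable on its UNIX domain socket. A minimal sketch, assuming the socket file appearing is a good-enough readiness signal; the real helper also issues an rpc.py probe and lives in common/autotest_common.sh. The python3 demo server below is illustrative, not part of the suite:

```shell
# Sketch of waitforlisten: poll for the target's UNIX domain socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= max_retries; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1                   # target died while we waited
        fi
        if [ -S "$rpc_addr" ]; then
            return 0                   # socket exists: treat as listening
        fi
        sleep 0.1
    done
    return 1
}

# demo server: bind a socket, hold it open briefly (python3 assumed)
sock=$(mktemp -u)
python3 -c "import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind('$sock')
time.sleep(10)" &
server_pid=$!
waitforlisten "$server_pid" "$sock" && listen_ok=yes
kill "$server_pid" 2>/dev/null || true
rm -f "$sock"
```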
00:05:50.906 [2024-11-19 10:00:04.913049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58625 ] 00:05:50.906 [2024-11-19 10:00:05.086614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.165 [2024-11-19 10:00:05.212082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.102 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.102 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:52.102 10:00:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58625 00:05:52.102 10:00:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58625 00:05:52.102 10:00:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58625 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58625 ']' 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58625 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58625 00:05:52.362 killing process with pid 58625 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58625' 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58625 00:05:52.362 10:00:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58625 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58625 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58625 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58625 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58625 ']' 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.269 ERROR: process (pid: 58625) is no longer running 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58625) - No such process 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.269 00:05:54.269 real 0m3.685s 00:05:54.269 user 0m3.616s 00:05:54.269 sys 0m0.799s 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.269 ************************************ 00:05:54.269 END TEST default_locks 00:05:54.269 ************************************ 00:05:54.269 10:00:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.528 10:00:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:54.528 10:00:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.528 10:00:08 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.529 10:00:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.529 ************************************ 00:05:54.529 START TEST default_locks_via_rpc 00:05:54.529 ************************************ 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58694 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58694 00:05:54.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58694 ']' 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.529 10:00:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.529 [2024-11-19 10:00:08.650058] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
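The locks_exist check used throughout cpu_locks (seen above for pid 58625) asserts that the target holds a file lock whose path contains spdk_cpu_lock, by scanning `lslocks -p` output. A sketch where a flock-held temp file stands in for SPDK's per-core lock file; lslocks and flock from util-linux are assumed present:

```shell
# Sketch of locks_exist: does <pid> hold a lock on a spdk_cpu_lock* file?
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# stand-in lock holder: flock acquires the lock, then execs sleep
lock_file=$(mktemp /tmp/spdk_cpu_lock.XXXXXX)
flock "$lock_file" sleep 10 &
holder_pid=$!
sleep 0.3                      # give flock a moment to acquire

locks_exist "$holder_pid" && lock_held=yes
kill "$holder_pid" 2>/dev/null || true
rm -f "$lock_file"
```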
00:05:54.529 [2024-11-19 10:00:08.650224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:05:54.788 [2024-11-19 10:00:08.812147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.788 [2024-11-19 10:00:08.933123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.725 10:00:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58694 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58694 00:05:55.725 10:00:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.984 10:00:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58694 00:05:55.984 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58694 ']' 00:05:55.984 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58694 00:05:55.984 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58694 00:05:56.242 killing process with pid 58694 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58694' 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58694 00:05:56.242 10:00:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58694 00:05:58.149 00:05:58.149 real 0m3.678s 00:05:58.149 user 0m3.678s 00:05:58.149 sys 0m0.782s 00:05:58.149 10:00:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.149 
************************************ 00:05:58.149 END TEST default_locks_via_rpc 00:05:58.149 ************************************ 00:05:58.149 10:00:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.149 10:00:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.149 10:00:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.149 10:00:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.149 10:00:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.149 ************************************ 00:05:58.149 START TEST non_locking_app_on_locked_coremask 00:05:58.149 ************************************ 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58763 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58763 /var/tmp/spdk.sock 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58763 ']' 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.149 10:00:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.409 [2024-11-19 10:00:12.413917] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:05:58.409 [2024-11-19 10:00:12.414423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58763 ] 00:05:58.409 [2024-11-19 10:00:12.594668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.668 [2024-11-19 10:00:12.710898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58783 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58783 /var/tmp/spdk2.sock 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58783 ']' 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.607 10:00:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.607 [2024-11-19 10:00:13.654643] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:05:59.607 [2024-11-19 10:00:13.655210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58783 ] 00:05:59.865 [2024-11-19 10:00:13.863971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:59.865 [2024-11-19 10:00:13.864057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.123 [2024-11-19 10:00:14.129558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.698 10:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.698 10:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.698 10:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58763 00:06:02.698 10:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58763 00:06:02.698 10:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.266 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58763 00:06:03.266 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58763 ']' 00:06:03.266 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58763 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58763 00:06:03.267 killing process with pid 58763 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58763' 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58763 00:06:03.267 10:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58763 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58783 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58783 ']' 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58783 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58783 00:06:07.460 killing process with pid 58783 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58783' 00:06:07.460 10:00:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58783 00:06:07.460 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58783 00:06:09.364 00:06:09.364 real 0m11.009s 00:06:09.364 user 0m11.396s 00:06:09.364 sys 0m1.623s 00:06:09.364 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.364 ************************************ 00:06:09.364 END TEST non_locking_app_on_locked_coremask 00:06:09.364 ************************************ 00:06:09.364 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 10:00:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:09.364 10:00:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.364 10:00:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.364 10:00:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 ************************************ 00:06:09.364 START TEST locking_app_on_unlocked_coremask 00:06:09.364 ************************************ 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58929 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58929 /var/tmp/spdk.sock 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58929 ']' 
00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.364 10:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 [2024-11-19 10:00:23.472219] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:09.364 [2024-11-19 10:00:23.472728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58929 ] 00:06:09.622 [2024-11-19 10:00:23.654571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.622 [2024-11-19 10:00:23.654619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.622 [2024-11-19 10:00:23.765393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58945 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58945 /var/tmp/spdk2.sock 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58945 ']' 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.557 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.557 [2024-11-19 10:00:24.672968] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:10.557 [2024-11-19 10:00:24.673412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:06:10.815 [2024-11-19 10:00:24.859917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.074 [2024-11-19 10:00:25.115260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.607 10:00:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.607 10:00:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.607 10:00:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58945 00:06:13.607 10:00:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58945 00:06:13.607 10:00:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58929 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58929 ']' 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58929 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58929 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.173 killing process with pid 58929 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58929' 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58929 00:06:14.173 10:00:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58929 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58945 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58945 ']' 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58945 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58945 00:06:18.418 killing process with pid 58945 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58945' 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58945 00:06:18.418 10:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58945 00:06:20.954 00:06:20.954 real 0m11.337s 00:06:20.954 user 0m11.653s 00:06:20.954 sys 0m1.635s 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.954 ************************************ 00:06:20.954 END TEST locking_app_on_unlocked_coremask 00:06:20.954 ************************************ 00:06:20.954 10:00:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:20.954 10:00:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.954 10:00:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.954 10:00:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.954 ************************************ 00:06:20.954 START TEST locking_app_on_locked_coremask 00:06:20.954 ************************************ 00:06:20.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59096 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59096 /var/tmp/spdk.sock 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59096 ']' 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.954 10:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.954 [2024-11-19 10:00:34.833658] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:20.954 [2024-11-19 10:00:34.833833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:06:20.954 [2024-11-19 10:00:35.003470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.954 [2024-11-19 10:00:35.129903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59112 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59112 /var/tmp/spdk2.sock 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59112 /var/tmp/spdk2.sock 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:21.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.893 10:00:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59112 /var/tmp/spdk2.sock 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.893 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.152 [2024-11-19 10:00:36.134882] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:22.152 [2024-11-19 10:00:36.135081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:06:22.152 [2024-11-19 10:00:36.335563] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59096 has claimed it. 00:06:22.152 [2024-11-19 10:00:36.335680] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:22.720 ERROR: process (pid: 59112) is no longer running 00:06:22.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59112) - No such process 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59096 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59096 00:06:22.720 10:00:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59096 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59096 ']' 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59096 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59096 00:06:22.980 
killing process with pid 59096 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59096' 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59096 00:06:22.980 10:00:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59096 00:06:25.517 00:06:25.517 real 0m4.547s 00:06:25.517 user 0m4.826s 00:06:25.517 sys 0m0.950s 00:06:25.517 10:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.517 ************************************ 00:06:25.517 END TEST locking_app_on_locked_coremask 00:06:25.517 ************************************ 00:06:25.517 10:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.517 10:00:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.518 10:00:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.518 10:00:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.518 10:00:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.518 ************************************ 00:06:25.518 START TEST locking_overlapped_coremask 00:06:25.518 ************************************ 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59177 00:06:25.518 10:00:39 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59177 /var/tmp/spdk.sock 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59177 ']' 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.518 10:00:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.518 [2024-11-19 10:00:39.470668] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:25.518 [2024-11-19 10:00:39.470887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ] 00:06:25.518 [2024-11-19 10:00:39.655544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.777 [2024-11-19 10:00:39.785751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.777 [2024-11-19 10:00:39.785880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.777 [2024-11-19 10:00:39.785899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59201 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59201 /var/tmp/spdk2.sock 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59201 /var/tmp/spdk2.sock 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59201 /var/tmp/spdk2.sock 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59201 ']' 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.714 10:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.714 [2024-11-19 10:00:40.796722] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:26.714 [2024-11-19 10:00:40.797209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:06:26.974 [2024-11-19 10:00:40.993823] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59177 has claimed it. 00:06:26.974 [2024-11-19 10:00:40.993932] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:27.233 ERROR: process (pid: 59201) is no longer running 00:06:27.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59201) - No such process 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59177 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59177 ']' 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59177 00:06:27.233 10:00:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.233 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59177 00:06:27.492 killing process with pid 59177 00:06:27.492 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.492 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.492 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59177' 00:06:27.492 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59177 00:06:27.492 10:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59177 00:06:30.030 00:06:30.030 real 0m4.314s 00:06:30.030 user 0m11.581s 00:06:30.030 sys 0m0.766s 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.030 ************************************ 00:06:30.030 END TEST locking_overlapped_coremask 00:06:30.030 ************************************ 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.030 10:00:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.030 10:00:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.030 10:00:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.030 10:00:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.030 ************************************ 00:06:30.030 START TEST 
locking_overlapped_coremask_via_rpc 00:06:30.030 ************************************ 00:06:30.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59265 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59265 ']' 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.030 10:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.030 [2024-11-19 10:00:43.838827] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:30.030 [2024-11-19 10:00:43.839831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:06:30.030 [2024-11-19 10:00:44.022960] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.030 [2024-11-19 10:00:44.023009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.030 [2024-11-19 10:00:44.151312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.030 [2024-11-19 10:00:44.151434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.030 [2024-11-19 10:00:44.151457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59288 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59288 /var/tmp/spdk2.sock 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.969 10:00:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.969 [2024-11-19 10:00:45.163045] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:30.969 [2024-11-19 10:00:45.164110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:06:31.228 [2024-11-19 10:00:45.375289] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.228 [2024-11-19 10:00:45.375347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.487 [2024-11-19 10:00:45.660016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.487 [2024-11-19 10:00:45.663047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.487 [2024-11-19 10:00:45.663069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.092 10:00:47 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.092 10:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.092 [2024-11-19 10:00:48.001053] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59265 has claimed it. 00:06:34.092 request: 00:06:34.092 { 00:06:34.092 "method": "framework_enable_cpumask_locks", 00:06:34.092 "req_id": 1 00:06:34.092 } 00:06:34.092 Got JSON-RPC error response 00:06:34.092 response: 00:06:34.092 { 00:06:34.092 "code": -32603, 00:06:34.092 "message": "Failed to claim CPU core: 2" 00:06:34.092 } 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.092 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59265 ']' 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59288 /var/tmp/spdk2.sock 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.093 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.661 00:06:34.661 real 0m4.909s 00:06:34.661 user 0m1.795s 00:06:34.661 sys 0m0.266s 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.661 10:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.661 ************************************ 00:06:34.661 END TEST locking_overlapped_coremask_via_rpc 00:06:34.661 ************************************ 00:06:34.661 10:00:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.661 10:00:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:06:34.661 10:00:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59265 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59265 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59265 00:06:34.661 killing process with pid 59265 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59265' 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59265 00:06:34.661 10:00:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59265 00:06:37.195 10:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59288 ]] 00:06:37.195 10:00:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59288 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59288 ']' 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59288 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288 00:06:37.195 killing process with pid 59288 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59288' 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59288 00:06:37.195 10:00:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59288 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.112 Process with pid 59265 is not found 00:06:39.112 Process with pid 59288 is not found 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59265 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59265 00:06:39.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59265) - No such process 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59265 is not found' 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59288 ]] 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59288 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59288 ']' 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59288 00:06:39.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59288) - No such process 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59288 is not found' 00:06:39.112 10:00:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.112 00:06:39.112 real 0m48.727s 00:06:39.112 user 1m25.395s 00:06:39.112 sys 0m8.289s 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.112 10:00:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.112 
************************************ 00:06:39.112 END TEST cpu_locks 00:06:39.112 ************************************ 00:06:39.406 ************************************ 00:06:39.406 END TEST event 00:06:39.406 ************************************ 00:06:39.406 00:06:39.406 real 1m20.481s 00:06:39.406 user 2m28.058s 00:06:39.406 sys 0m12.548s 00:06:39.406 10:00:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.406 10:00:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.406 10:00:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.406 10:00:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.406 10:00:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.406 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.406 ************************************ 00:06:39.406 START TEST thread 00:06:39.406 ************************************ 00:06:39.406 10:00:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.406 * Looking for test storage... 
00:06:39.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.407 10:00:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.407 10:00:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.407 10:00:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.407 10:00:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.407 10:00:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.407 10:00:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.407 10:00:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.407 10:00:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.407 10:00:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.407 10:00:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.407 10:00:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.407 10:00:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:39.407 10:00:53 thread -- scripts/common.sh@345 -- # : 1 00:06:39.407 10:00:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.407 10:00:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.407 10:00:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:39.407 10:00:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:39.407 10:00:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.407 10:00:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:39.407 10:00:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.407 10:00:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:39.407 10:00:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:39.407 10:00:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.407 10:00:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:39.407 10:00:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.407 10:00:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.407 10:00:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.407 10:00:53 thread -- scripts/common.sh@368 -- # return 0 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.407 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:00:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.407 10:00:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.407 ************************************ 00:06:39.407 START TEST thread_poller_perf 00:06:39.407 ************************************ 00:06:39.407 10:00:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.666 [2024-11-19 10:00:53.669617] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:39.666 [2024-11-19 10:00:53.670585] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59483 ] 00:06:39.666 [2024-11-19 10:00:53.860830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.925 [2024-11-19 10:00:54.022866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.925 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.303 [2024-11-19T10:00:55.535Z] ====================================== 00:06:41.303 [2024-11-19T10:00:55.535Z] busy:2214756718 (cyc) 00:06:41.303 [2024-11-19T10:00:55.535Z] total_run_count: 333000 00:06:41.303 [2024-11-19T10:00:55.535Z] tsc_hz: 2200000000 (cyc) 00:06:41.303 [2024-11-19T10:00:55.535Z] ====================================== 00:06:41.303 [2024-11-19T10:00:55.535Z] poller_cost: 6650 (cyc), 3022 (nsec) 00:06:41.303 00:06:41.303 real 0m1.636s 00:06:41.303 user 0m1.406s 00:06:41.303 sys 0m0.119s 00:06:41.303 10:00:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.303 10:00:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.303 ************************************ 00:06:41.303 END TEST thread_poller_perf 00:06:41.303 ************************************ 00:06:41.303 10:00:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.303 10:00:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.303 10:00:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.303 10:00:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.303 ************************************ 00:06:41.303 START TEST thread_poller_perf 00:06:41.303 
************************************ 00:06:41.303 10:00:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.303 [2024-11-19 10:00:55.358327] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:41.303 [2024-11-19 10:00:55.358464] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59521 ] 00:06:41.304 [2024-11-19 10:00:55.533704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.562 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.562 [2024-11-19 10:00:55.662361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.941 [2024-11-19T10:00:57.173Z] ====================================== 00:06:42.941 [2024-11-19T10:00:57.173Z] busy:2204253108 (cyc) 00:06:42.941 [2024-11-19T10:00:57.173Z] total_run_count: 4390000 00:06:42.941 [2024-11-19T10:00:57.173Z] tsc_hz: 2200000000 (cyc) 00:06:42.941 [2024-11-19T10:00:57.173Z] ====================================== 00:06:42.941 [2024-11-19T10:00:57.173Z] poller_cost: 502 (cyc), 228 (nsec) 00:06:42.941 00:06:42.941 real 0m1.587s 00:06:42.941 user 0m1.371s 00:06:42.941 sys 0m0.107s 00:06:42.941 10:00:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.941 10:00:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.941 ************************************ 00:06:42.941 END TEST thread_poller_perf 00:06:42.941 ************************************ 00:06:42.941 10:00:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.941 00:06:42.941 real 0m3.525s 00:06:42.941 user 0m2.912s 00:06:42.941 sys 0m0.393s 00:06:42.941 ************************************ 
00:06:42.941 END TEST thread 00:06:42.941 ************************************ 00:06:42.941 10:00:56 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.941 10:00:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.941 10:00:56 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:42.941 10:00:56 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.941 10:00:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.941 10:00:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.941 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.941 ************************************ 00:06:42.941 START TEST app_cmdline 00:06:42.941 ************************************ 00:06:42.941 10:00:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.941 * Looking for test storage... 00:06:42.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.941 10:00:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.941 10:00:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.941 10:00:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.201 10:00:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.201 --rc genhtml_branch_coverage=1 00:06:43.201 --rc genhtml_function_coverage=1 00:06:43.201 --rc 
genhtml_legend=1 00:06:43.201 --rc geninfo_all_blocks=1 00:06:43.201 --rc geninfo_unexecuted_blocks=1 00:06:43.201 00:06:43.201 ' 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.201 --rc genhtml_branch_coverage=1 00:06:43.201 --rc genhtml_function_coverage=1 00:06:43.201 --rc genhtml_legend=1 00:06:43.201 --rc geninfo_all_blocks=1 00:06:43.201 --rc geninfo_unexecuted_blocks=1 00:06:43.201 00:06:43.201 ' 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.201 --rc genhtml_branch_coverage=1 00:06:43.201 --rc genhtml_function_coverage=1 00:06:43.201 --rc genhtml_legend=1 00:06:43.201 --rc geninfo_all_blocks=1 00:06:43.201 --rc geninfo_unexecuted_blocks=1 00:06:43.201 00:06:43.201 ' 00:06:43.201 10:00:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.201 --rc genhtml_branch_coverage=1 00:06:43.201 --rc genhtml_function_coverage=1 00:06:43.201 --rc genhtml_legend=1 00:06:43.201 --rc geninfo_all_blocks=1 00:06:43.201 --rc geninfo_unexecuted_blocks=1 00:06:43.201 00:06:43.201 ' 00:06:43.201 10:00:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:43.201 10:00:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59610 00:06:43.201 10:00:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:43.201 10:00:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59610 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.202 10:00:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.202 [2024-11-19 10:00:57.331424] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:43.202 [2024-11-19 10:00:57.331910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:06:43.461 [2024-11-19 10:00:57.517635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.461 [2024-11-19 10:00:57.649517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.401 10:00:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.401 10:00:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:44.401 10:00:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:44.660 { 00:06:44.660 "version": "SPDK v25.01-pre git sha1 fc96810c2", 00:06:44.660 "fields": { 00:06:44.660 "major": 25, 00:06:44.660 "minor": 1, 00:06:44.660 "patch": 0, 00:06:44.660 "suffix": "-pre", 00:06:44.660 "commit": "fc96810c2" 00:06:44.660 } 00:06:44.660 } 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.660 10:00:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.660 10:00:58 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:44.661 10:00:58 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.920 request: 00:06:44.920 { 00:06:44.920 "method": "env_dpdk_get_mem_stats", 00:06:44.920 "req_id": 1 00:06:44.920 } 00:06:44.920 Got JSON-RPC error response 00:06:44.920 response: 00:06:44.920 { 00:06:44.920 "code": -32601, 00:06:44.920 "message": "Method not found" 00:06:44.920 } 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.920 10:00:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59610 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59610 ']' 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59610 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59610 00:06:44.920 killing process with pid 59610 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59610' 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 59610 00:06:44.920 10:00:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 59610 00:06:47.456 00:06:47.456 real 0m4.315s 00:06:47.456 user 0m4.620s 00:06:47.456 sys 0m0.768s 00:06:47.456 10:01:01 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.456 ************************************ 00:06:47.456 END TEST app_cmdline 00:06:47.456 ************************************ 00:06:47.456 10:01:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 10:01:01 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.456 10:01:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.456 10:01:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.456 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 ************************************ 00:06:47.456 START TEST version 00:06:47.456 ************************************ 00:06:47.456 10:01:01 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.456 * Looking for test storage... 00:06:47.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.456 10:01:01 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.456 10:01:01 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.456 10:01:01 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.456 10:01:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.456 10:01:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.456 10:01:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.456 10:01:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.457 10:01:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.457 10:01:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.457 10:01:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.457 10:01:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.457 10:01:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.457 10:01:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.457 10:01:01 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:47.457 10:01:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.457 10:01:01 version -- scripts/common.sh@344 -- # case "$op" in 00:06:47.457 10:01:01 version -- scripts/common.sh@345 -- # : 1 00:06:47.457 10:01:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.457 10:01:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.457 10:01:01 version -- scripts/common.sh@365 -- # decimal 1 00:06:47.457 10:01:01 version -- scripts/common.sh@353 -- # local d=1 00:06:47.457 10:01:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.457 10:01:01 version -- scripts/common.sh@355 -- # echo 1 00:06:47.457 10:01:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.457 10:01:01 version -- scripts/common.sh@366 -- # decimal 2 00:06:47.457 10:01:01 version -- scripts/common.sh@353 -- # local d=2 00:06:47.457 10:01:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.457 10:01:01 version -- scripts/common.sh@355 -- # echo 2 00:06:47.457 10:01:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.457 10:01:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.457 10:01:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.457 10:01:01 version -- scripts/common.sh@368 -- # return 0 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.457 --rc genhtml_branch_coverage=1 00:06:47.457 --rc genhtml_function_coverage=1 00:06:47.457 --rc genhtml_legend=1 00:06:47.457 --rc geninfo_all_blocks=1 00:06:47.457 --rc geninfo_unexecuted_blocks=1 00:06:47.457 00:06:47.457 ' 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:47.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.457 --rc genhtml_branch_coverage=1 00:06:47.457 --rc genhtml_function_coverage=1 00:06:47.457 --rc genhtml_legend=1 00:06:47.457 --rc geninfo_all_blocks=1 00:06:47.457 --rc geninfo_unexecuted_blocks=1 00:06:47.457 00:06:47.457 ' 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.457 --rc genhtml_branch_coverage=1 00:06:47.457 --rc genhtml_function_coverage=1 00:06:47.457 --rc genhtml_legend=1 00:06:47.457 --rc geninfo_all_blocks=1 00:06:47.457 --rc geninfo_unexecuted_blocks=1 00:06:47.457 00:06:47.457 ' 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.457 --rc genhtml_branch_coverage=1 00:06:47.457 --rc genhtml_function_coverage=1 00:06:47.457 --rc genhtml_legend=1 00:06:47.457 --rc geninfo_all_blocks=1 00:06:47.457 --rc geninfo_unexecuted_blocks=1 00:06:47.457 00:06:47.457 ' 00:06:47.457 10:01:01 version -- app/version.sh@17 -- # get_header_version major 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # cut -f2 00:06:47.457 10:01:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.457 10:01:01 version -- app/version.sh@17 -- # major=25 00:06:47.457 10:01:01 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.457 10:01:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # cut -f2 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.457 10:01:01 version -- app/version.sh@18 -- # minor=1 00:06:47.457 10:01:01 
version -- app/version.sh@19 -- # get_header_version patch 00:06:47.457 10:01:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # cut -f2 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.457 10:01:01 version -- app/version.sh@19 -- # patch=0 00:06:47.457 10:01:01 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.457 10:01:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # cut -f2 00:06:47.457 10:01:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.457 10:01:01 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.457 10:01:01 version -- app/version.sh@22 -- # version=25.1 00:06:47.457 10:01:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.457 10:01:01 version -- app/version.sh@28 -- # version=25.1rc0 00:06:47.457 10:01:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:47.457 10:01:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.457 10:01:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:47.457 10:01:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:47.457 00:06:47.457 real 0m0.241s 00:06:47.457 user 0m0.159s 00:06:47.457 sys 0m0.118s 00:06:47.457 ************************************ 00:06:47.457 END TEST version 00:06:47.457 ************************************ 00:06:47.457 10:01:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.457 10:01:01 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.457 
10:01:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:47.457 10:01:01 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:47.457 10:01:01 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:47.457 10:01:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.457 10:01:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.457 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.457 ************************************ 00:06:47.457 START TEST bdev_raid 00:06:47.457 ************************************ 00:06:47.457 10:01:01 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:47.717 * Looking for test storage... 00:06:47.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.717 10:01:01 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.717 --rc genhtml_branch_coverage=1 00:06:47.717 --rc genhtml_function_coverage=1 00:06:47.717 --rc genhtml_legend=1 00:06:47.717 --rc geninfo_all_blocks=1 00:06:47.717 --rc geninfo_unexecuted_blocks=1 00:06:47.717 00:06:47.717 ' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.717 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:47.717 --rc genhtml_branch_coverage=1 00:06:47.717 --rc genhtml_function_coverage=1 00:06:47.717 --rc genhtml_legend=1 00:06:47.717 --rc geninfo_all_blocks=1 00:06:47.717 --rc geninfo_unexecuted_blocks=1 00:06:47.717 00:06:47.717 ' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.717 --rc genhtml_branch_coverage=1 00:06:47.717 --rc genhtml_function_coverage=1 00:06:47.717 --rc genhtml_legend=1 00:06:47.717 --rc geninfo_all_blocks=1 00:06:47.717 --rc geninfo_unexecuted_blocks=1 00:06:47.717 00:06:47.717 ' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.717 --rc genhtml_branch_coverage=1 00:06:47.717 --rc genhtml_function_coverage=1 00:06:47.717 --rc genhtml_legend=1 00:06:47.717 --rc geninfo_all_blocks=1 00:06:47.717 --rc geninfo_unexecuted_blocks=1 00:06:47.717 00:06:47.717 ' 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:47.717 10:01:01 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:47.717 10:01:01 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.717 10:01:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.717 ************************************ 
00:06:47.717 START TEST raid1_resize_data_offset_test 00:06:47.717 ************************************ 00:06:47.717 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59797 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59797' 00:06:47.718 Process raid pid: 59797 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59797 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59797 ']' 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.718 10:01:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.976 [2024-11-19 10:01:01.968250] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:47.976 [2024-11-19 10:01:01.968642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.976 [2024-11-19 10:01:02.141191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.235 [2024-11-19 10:01:02.276959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.493 [2024-11-19 10:01:02.491761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.493 [2024-11-19 10:01:02.491838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 malloc0 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 malloc1 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 null0 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 [2024-11-19 10:01:03.189306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:49.060 [2024-11-19 10:01:03.192078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:49.060 [2024-11-19 10:01:03.192143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:49.060 [2024-11-19 10:01:03.192387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.060 [2024-11-19 10:01:03.192410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:49.060 [2024-11-19 10:01:03.192766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.060 [2024-11-19 10:01:03.193032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.060 [2024-11-19 10:01:03.193054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.060 [2024-11-19 10:01:03.193348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.060 [2024-11-19 10:01:03.253377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.060 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.628 malloc2 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.628 [2024-11-19 10:01:03.808479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:49.628 [2024-11-19 10:01:03.825899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.628 [2024-11-19 10:01:03.828611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.628 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59797 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59797 ']' 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59797 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59797 00:06:49.887 killing process with pid 59797 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59797' 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59797 00:06:49.887 [2024-11-19 10:01:03.922643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.887 10:01:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59797 00:06:49.887 [2024-11-19 10:01:03.924160] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:49.887 [2024-11-19 10:01:03.924248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.887 [2024-11-19 10:01:03.924275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:49.887 [2024-11-19 10:01:03.956073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.887 [2024-11-19 10:01:03.956552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.887 [2024-11-19 10:01:03.956578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:51.790 [2024-11-19 10:01:05.562682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.728 ************************************ 00:06:52.728 END TEST raid1_resize_data_offset_test 00:06:52.728 ************************************ 00:06:52.728 10:01:06 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:52.728 00:06:52.728 real 0m4.738s 00:06:52.728 user 0m4.646s 00:06:52.728 sys 0m0.740s 00:06:52.728 10:01:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.728 10:01:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.728 10:01:06 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:52.728 10:01:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.728 10:01:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.728 10:01:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.729 ************************************ 00:06:52.729 START TEST raid0_resize_superblock_test 00:06:52.729 ************************************ 00:06:52.729 Process raid pid: 59881 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59881 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59881' 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59881 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59881 ']' 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.729 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.729 10:01:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.729 [2024-11-19 10:01:06.757032] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:06:52.729 [2024-11-19 10:01:06.758166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.729 [2024-11-19 10:01:06.946543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.988 [2024-11-19 10:01:07.086632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.247 [2024-11-19 10:01:07.304813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.247 [2024-11-19 10:01:07.304904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.507 10:01:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.507 10:01:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:53.507 10:01:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:53.507 10:01:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.507 10:01:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.075 
malloc0 00:06:54.075 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.075 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.075 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.075 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.075 [2024-11-19 10:01:08.305670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.075 [2024-11-19 10:01:08.305782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.075 [2024-11-19 10:01:08.305853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:54.075 [2024-11-19 10:01:08.305877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.334 [2024-11-19 10:01:08.309072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.335 [2024-11-19 10:01:08.309122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:54.335 pt0 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.335 c8f9d12d-de86-436f-b72f-47a9f87cd44f 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:54.335 10:01:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.335 8d413b13-49e9-4ad6-94a3-437bccb8a6aa 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.335 fce9ddd0-ca63-43b3-854e-7e60fb110f96 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.335 [2024-11-19 10:01:08.492940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8d413b13-49e9-4ad6-94a3-437bccb8a6aa is claimed 00:06:54.335 [2024-11-19 10:01:08.493225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fce9ddd0-ca63-43b3-854e-7e60fb110f96 is claimed 00:06:54.335 [2024-11-19 10:01:08.493406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.335 [2024-11-19 10:01:08.493430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:54.335 [2024-11-19 10:01:08.493825] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:54.335 [2024-11-19 10:01:08.494098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.335 [2024-11-19 10:01:08.494115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.335 [2024-11-19 10:01:08.494355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.335 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 [2024-11-19 10:01:08.617236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 [2024-11-19 10:01:08.665244] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.594 [2024-11-19 10:01:08.665277] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8d413b13-49e9-4ad6-94a3-437bccb8a6aa' was resized: old size 131072, new size 204800 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 [2024-11-19 10:01:08.673102] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.594 [2024-11-19 10:01:08.673145] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fce9ddd0-ca63-43b3-854e-7e60fb110f96' was resized: old size 131072, new size 204800 00:06:54.594 [2024-11-19 10:01:08.673239] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:54.594 [2024-11-19 10:01:08.789291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.594 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.854 [2024-11-19 10:01:08.833035] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:54.854 [2024-11-19 10:01:08.833330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:54.854 [2024-11-19 10:01:08.833361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.854 [2024-11-19 10:01:08.833387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:54.854 [2024-11-19 10:01:08.833570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.854 [2024-11-19 10:01:08.833643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.854 [2024-11-19 10:01:08.833663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.854 [2024-11-19 10:01:08.840985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.854 [2024-11-19 10:01:08.841075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.854 [2024-11-19 10:01:08.841108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:54.854 [2024-11-19 10:01:08.841155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.854 [2024-11-19 10:01:08.844510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.854 [2024-11-19 10:01:08.844703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:54.854 pt0 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.854 [2024-11-19 10:01:08.847714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8d413b13-49e9-4ad6-94a3-437bccb8a6aa 00:06:54.854 [2024-11-19 10:01:08.847956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8d413b13-49e9-4ad6-94a3-437bccb8a6aa is claimed 00:06:54.854 [2024-11-19 10:01:08.848115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fce9ddd0-ca63-43b3-854e-7e60fb110f96 00:06:54.854 [2024-11-19 10:01:08.848152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fce9ddd0-ca63-43b3-854e-7e60fb110f96 is claimed 00:06:54.854 [2024-11-19 10:01:08.848311] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fce9ddd0-ca63-43b3-854e-7e60fb110f96 (2) smaller than existing raid bdev Raid (3) 00:06:54.854 [2024-11-19 10:01:08.848371] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8d413b13-49e9-4ad6-94a3-437bccb8a6aa: File exists 00:06:54.854 [2024-11-19 10:01:08.848434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:54.854 [2024-11-19 10:01:08.848455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:54.854 [2024-11-19 10:01:08.848815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:54.854 [2024-11-19 10:01:08.849042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:54.854 [2024-11-19 
10:01:08.849064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:54.854 [2024-11-19 10:01:08.849397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.854 [2024-11-19 10:01:08.861487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59881 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59881 ']' 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59881 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59881 00:06:54.854 killing process with pid 59881 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59881' 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59881 00:06:54.854 [2024-11-19 10:01:08.939390] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.854 [2024-11-19 10:01:08.939455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.854 10:01:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59881 00:06:54.854 [2024-11-19 10:01:08.939505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.854 [2024-11-19 10:01:08.939519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:56.228 [2024-11-19 10:01:10.315654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.166 ************************************ 00:06:57.166 END TEST raid0_resize_superblock_test 00:06:57.166 ************************************ 00:06:57.166 10:01:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:57.166 00:06:57.166 real 0m4.720s 00:06:57.166 user 0m4.910s 00:06:57.166 sys 0m0.736s 00:06:57.166 10:01:11 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.166 10:01:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.426 10:01:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:57.426 10:01:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.426 10:01:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.426 10:01:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.426 ************************************ 00:06:57.426 START TEST raid1_resize_superblock_test 00:06:57.426 ************************************ 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:57.426 Process raid pid: 59988 00:06:57.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59988 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59988' 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59988 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59988 ']' 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.426 10:01:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.426 [2024-11-19 10:01:11.553195] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:06:57.426 [2024-11-19 10:01:11.553643] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.685 [2024-11-19 10:01:11.739772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.685 [2024-11-19 10:01:11.871324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.945 [2024-11-19 10:01:12.088058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.945 [2024-11-19 10:01:12.088117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.514 10:01:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.514 10:01:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:58.514 10:01:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:58.514 10:01:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.514 10:01:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 malloc0 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 [2024-11-19 10:01:13.063670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.085 [2024-11-19 10:01:13.063767] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.085 [2024-11-19 10:01:13.063826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:59.085 [2024-11-19 10:01:13.063853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.085 [2024-11-19 10:01:13.067137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.085 [2024-11-19 10:01:13.067374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:59.085 pt0 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 baca6c2f-2f6a-4b58-a0a5-0c21bed3eafc 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 2e9f98ec-aa11-4328-ab21-6f93f5bb5b81 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 2db83004-4056-4842-b7e9-c60d22a473ae 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 [2024-11-19 10:01:13.257331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e9f98ec-aa11-4328-ab21-6f93f5bb5b81 is claimed 00:06:59.085 [2024-11-19 10:01:13.257441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2db83004-4056-4842-b7e9-c60d22a473ae is claimed 00:06:59.085 [2024-11-19 10:01:13.257627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:59.085 [2024-11-19 10:01:13.257651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:59.085 [2024-11-19 10:01:13.258053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:59.085 [2024-11-19 10:01:13.258323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:59.085 [2024-11-19 10:01:13.258347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:59.085 [2024-11-19 10:01:13.258569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.085 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 [2024-11-19 
10:01:13.377576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.344 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 [2024-11-19 10:01:13.425575] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.345 [2024-11-19 10:01:13.425606] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2e9f98ec-aa11-4328-ab21-6f93f5bb5b81' was resized: old size 131072, new size 204800 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.345 [2024-11-19 10:01:13.433549] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.345 [2024-11-19 10:01:13.433577] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2db83004-4056-4842-b7e9-c60d22a473ae' was resized: old size 131072, new size 204800 00:06:59.345 
[2024-11-19 10:01:13.433615] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.345 10:01:13 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.345 [2024-11-19 10:01:13.549612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.345 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.605 [2024-11-19 10:01:13.601403] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:59.605 [2024-11-19 10:01:13.601501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:59.605 [2024-11-19 10:01:13.601539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:59.605 [2024-11-19 10:01:13.601711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.605 [2024-11-19 10:01:13.601994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.605 [2024-11-19 10:01:13.602100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.605 
[2024-11-19 10:01:13.602124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.605 [2024-11-19 10:01:13.609354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.605 [2024-11-19 10:01:13.609574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.605 [2024-11-19 10:01:13.609645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:59.605 [2024-11-19 10:01:13.609872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.605 [2024-11-19 10:01:13.612976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.605 [2024-11-19 10:01:13.613164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:59.605 pt0 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.605 [2024-11-19 10:01:13.615644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2e9f98ec-aa11-4328-ab21-6f93f5bb5b81 00:06:59.605 [2024-11-19 10:01:13.615901] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e9f98ec-aa11-4328-ab21-6f93f5bb5b81 is claimed 00:06:59.605 [2024-11-19 10:01:13.616069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2db83004-4056-4842-b7e9-c60d22a473ae 00:06:59.605 [2024-11-19 10:01:13.616105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2db83004-4056-4842-b7e9-c60d22a473ae is claimed 00:06:59.605 [2024-11-19 10:01:13.616280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2db83004-4056-4842-b7e9-c60d22a473ae (2) smaller than existing raid bdev Raid (3) 00:06:59.605 [2024-11-19 10:01:13.616310] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2e9f98ec-aa11-4328-ab21-6f93f5bb5b81: File exists 00:06:59.605 [2024-11-19 10:01:13.616389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:59.605 [2024-11-19 10:01:13.616410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:59.605 [2024-11-19 10:01:13.616805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:59.605 [2024-11-19 10:01:13.617054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:59.605 [2024-11-19 10:01:13.617071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:59.605 [2024-11-19 10:01:13.617278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.605 [2024-11-19 10:01:13.629661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59988 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59988 ']' 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59988 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59988 00:06:59.605 killing process with pid 59988 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 59988' 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59988 00:06:59.605 [2024-11-19 10:01:13.703368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.605 [2024-11-19 10:01:13.703429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.605 10:01:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59988 00:06:59.605 [2024-11-19 10:01:13.703483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.605 [2024-11-19 10:01:13.703495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:00.984 [2024-11-19 10:01:15.020831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.922 ************************************ 00:07:01.922 END TEST raid1_resize_superblock_test 00:07:01.922 ************************************ 00:07:01.922 10:01:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:01.922 00:07:01.922 real 0m4.647s 00:07:01.922 user 0m4.851s 00:07:01.922 sys 0m0.741s 00:07:01.922 10:01:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.922 10:01:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:01.922 10:01:16 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:01.922 10:01:16 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.922 10:01:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.922 10:01:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.181 ************************************ 00:07:02.181 START TEST raid_function_test_raid0 00:07:02.181 ************************************ 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:02.181 Process raid pid: 60085 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60085 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60085' 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60085 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60085 ']' 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.181 10:01:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.181 [2024-11-19 10:01:16.275022] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:02.181 [2024-11-19 10:01:16.275225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.440 [2024-11-19 10:01:16.453587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.440 [2024-11-19 10:01:16.587466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.699 [2024-11-19 10:01:16.800070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.699 [2024-11-19 10:01:16.800120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.304 Base_1 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.304 
10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.304 Base_2 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.304 [2024-11-19 10:01:17.327743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:03.304 [2024-11-19 10:01:17.330280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:03.304 [2024-11-19 10:01:17.330363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:03.304 [2024-11-19 10:01:17.330382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.304 [2024-11-19 10:01:17.330656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.304 [2024-11-19 10:01:17.330895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:03.304 [2024-11-19 10:01:17.330911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:03.304 [2024-11-19 10:01:17.331076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:03.304 10:01:17 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.304 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:03.305 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.305 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:03.305 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.305 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:03.305 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:03.564 [2024-11-19 10:01:17.675917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:03.564 /dev/nbd0 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.564 1+0 records in 00:07:03.564 1+0 records out 00:07:03.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317619 s, 12.9 MB/s 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.564 10:01:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:03.823 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.823 { 00:07:03.823 "nbd_device": "/dev/nbd0", 00:07:03.823 "bdev_name": "raid" 00:07:03.823 } 00:07:03.823 ]' 00:07:03.823 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.823 { 00:07:03.823 "nbd_device": "/dev/nbd0", 00:07:03.823 "bdev_name": "raid" 00:07:03.823 } 00:07:03.823 ]' 00:07:03.823 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:04.083 4096+0 records in 00:07:04.083 4096+0 records out 00:07:04.083 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0246552 s, 85.1 MB/s 00:07:04.083 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:04.342 4096+0 records in 00:07:04.342 4096+0 records out 00:07:04.342 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.358203 s, 5.9 MB/s 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:04.342 128+0 records in 00:07:04.342 128+0 records out 00:07:04.342 65536 bytes (66 kB, 64 KiB) copied, 0.0010094 s, 64.9 MB/s 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:04.342 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:04.343 2035+0 records in 00:07:04.343 2035+0 records out 00:07:04.343 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00683854 s, 152 MB/s 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:04.343 456+0 records in 00:07:04.343 456+0 records out 00:07:04.343 233472 bytes (233 kB, 228 KiB) copied, 0.00328965 s, 71.0 MB/s 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.343 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.911 [2024-11-19 10:01:18.884090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.911 10:01:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60085 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60085 ']' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60085 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60085 00:07:05.170 killing process with pid 60085 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60085' 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60085 00:07:05.170 10:01:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60085 00:07:05.170 [2024-11-19 10:01:19.299843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.170 [2024-11-19 10:01:19.300003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.170 [2024-11-19 10:01:19.300131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.170 [2024-11-19 10:01:19.300163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:05.429 [2024-11-19 10:01:19.479495] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.367 ************************************ 00:07:06.367 END TEST raid_function_test_raid0 00:07:06.367 ************************************ 00:07:06.367 10:01:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:06.367 00:07:06.367 real 0m4.367s 00:07:06.367 user 0m5.337s 00:07:06.367 sys 0m1.072s 00:07:06.367 10:01:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.367 10:01:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.367 10:01:20 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:06.367 10:01:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.367 10:01:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.367 10:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.367 
************************************ 00:07:06.367 START TEST raid_function_test_concat 00:07:06.367 ************************************ 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60220 00:07:06.367 Process raid pid: 60220 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60220' 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60220 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60220 ']' 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.367 10:01:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:06.626 [2024-11-19 10:01:20.683772] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:06.627 [2024-11-19 10:01:20.684029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.627 [2024-11-19 10:01:20.857774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.886 [2024-11-19 10:01:20.989089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.144 [2024-11-19 10:01:21.213422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.144 [2024-11-19 10:01:21.213775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.403 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.403 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:07.403 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:07.403 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.403 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.662 Base_1 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.662 Base_2 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.662 [2024-11-19 10:01:21.715287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.662 [2024-11-19 10:01:21.718150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.662 [2024-11-19 10:01:21.718255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.662 [2024-11-19 10:01:21.718275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.662 [2024-11-19 10:01:21.718578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.662 [2024-11-19 10:01:21.718761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.662 [2024-11-19 10:01:21.718776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:07.662 [2024-11-19 10:01:21.719197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.662 10:01:21 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.662 10:01:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:07.921 [2024-11-19 10:01:22.031538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:07.921 /dev/nbd0 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.921 1+0 records in 00:07:07.921 1+0 records out 00:07:07.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410474 s, 10.0 MB/s 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.921 
10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.921 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:08.180 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.180 { 00:07:08.180 "nbd_device": "/dev/nbd0", 00:07:08.180 "bdev_name": "raid" 00:07:08.180 } 00:07:08.180 ]' 00:07:08.180 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.180 { 00:07:08.180 "nbd_device": "/dev/nbd0", 00:07:08.180 "bdev_name": "raid" 00:07:08.180 } 00:07:08.180 ]' 00:07:08.180 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:08.438 
10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:08.438 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:08.439 4096+0 records in 00:07:08.439 4096+0 records out 00:07:08.439 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0301613 s, 69.5 MB/s 00:07:08.439 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:08.697 4096+0 records in 00:07:08.697 4096+0 
records out 00:07:08.697 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.328653 s, 6.4 MB/s 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:08.697 128+0 records in 00:07:08.697 128+0 records out 00:07:08.697 65536 bytes (66 kB, 64 KiB) copied, 0.00106545 s, 61.5 MB/s 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:08.697 2035+0 records in 00:07:08.697 2035+0 records out 00:07:08.697 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00817814 s, 127 MB/s 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:08.697 456+0 records in 00:07:08.697 456+0 records out 00:07:08.697 233472 bytes (233 kB, 228 KiB) copied, 0.00262598 s, 88.9 MB/s 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.697 10:01:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.264 [2024-11-19 10:01:23.240681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:09.264 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.264 10:01:23 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60220 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60220 ']' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60220 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60220 00:07:09.523 killing process with pid 60220 00:07:09.523 10:01:23 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60220' 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60220 00:07:09.523 10:01:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60220 00:07:09.523 [2024-11-19 10:01:23.596426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.523 [2024-11-19 10:01:23.596562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.523 [2024-11-19 10:01:23.596680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.523 [2024-11-19 10:01:23.596897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:09.523 [2024-11-19 10:01:23.751761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.941 ************************************ 00:07:10.941 END TEST raid_function_test_concat 00:07:10.941 ************************************ 00:07:10.941 10:01:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:10.941 00:07:10.941 real 0m4.168s 00:07:10.941 user 0m5.011s 00:07:10.941 sys 0m1.072s 00:07:10.941 10:01:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.941 10:01:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:10.941 10:01:24 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:10.941 10:01:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.941 10:01:24 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.941 10:01:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.941 ************************************ 00:07:10.941 START TEST raid0_resize_test 00:07:10.941 ************************************ 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:10.941 Process raid pid: 60348 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60348 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60348' 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60348 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60348 ']' 00:07:10.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.941 10:01:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.941 [2024-11-19 10:01:24.924303] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:10.941 [2024-11-19 10:01:24.924515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.941 [2024-11-19 10:01:25.113172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.201 [2024-11-19 10:01:25.255263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.461 [2024-11-19 10:01:25.467936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.461 [2024-11-19 10:01:25.467980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.721 Base_1 
00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.721 Base_2 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.721 [2024-11-19 10:01:25.935673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:11.721 [2024-11-19 10:01:25.938298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:11.721 [2024-11-19 10:01:25.938365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.721 [2024-11-19 10:01:25.938383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:11.721 [2024-11-19 10:01:25.938645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:11.721 [2024-11-19 10:01:25.938806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.721 [2024-11-19 10:01:25.938836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:11.721 [2024-11-19 10:01:25.938985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.721 [2024-11-19 10:01:25.943634] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.721 [2024-11-19 10:01:25.943852] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:11.721 true 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.721 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 [2024-11-19 10:01:25.955934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.981 10:01:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 [2024-11-19 10:01:26.011627] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.981 [2024-11-19 10:01:26.011651] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:11.981 [2024-11-19 10:01:26.011701] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:11.981 true 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 [2024-11-19 10:01:26.023863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60348 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60348 ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60348 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60348 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60348' 00:07:11.981 killing process with pid 60348 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60348 00:07:11.981 [2024-11-19 10:01:26.107296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.981 10:01:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60348 00:07:11.981 [2024-11-19 10:01:26.107390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.981 [2024-11-19 10:01:26.107452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.981 [2024-11-19 10:01:26.107467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:11.981 [2024-11-19 10:01:26.123087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.366 ************************************ 00:07:13.366 END TEST raid0_resize_test 00:07:13.366 ************************************ 00:07:13.366 10:01:27 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:07:13.366 00:07:13.366 real 0m2.353s 00:07:13.366 user 0m2.547s 00:07:13.366 sys 0m0.456s 00:07:13.366 10:01:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.366 10:01:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.366 10:01:27 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:13.366 10:01:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.366 10:01:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.366 10:01:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.366 ************************************ 00:07:13.366 START TEST raid1_resize_test 00:07:13.366 ************************************ 00:07:13.366 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:13.367 Process raid pid: 60409 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60409 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60409' 00:07:13.367 10:01:27 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60409 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60409 ']' 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.367 10:01:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.367 [2024-11-19 10:01:27.334582] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:13.367 [2024-11-19 10:01:27.334820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.367 [2024-11-19 10:01:27.517919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.626 [2024-11-19 10:01:27.653102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.886 [2024-11-19 10:01:27.863217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.886 [2024-11-19 10:01:27.863270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 Base_1 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 Base_2 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 [2024-11-19 10:01:28.338847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:14.145 [2024-11-19 10:01:28.341329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:14.145 [2024-11-19 10:01:28.341403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:14.145 [2024-11-19 10:01:28.341421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:14.145 [2024-11-19 10:01:28.341689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.145 [2024-11-19 10:01:28.341894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:14.145 [2024-11-19 10:01:28.341910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:14.145 [2024-11-19 10:01:28.342070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 [2024-11-19 10:01:28.346832] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.145 [2024-11-19 10:01:28.346866] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:14.145 true 00:07:14.145 
10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:14.145 [2024-11-19 10:01:28.359043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.145 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.404 [2024-11-19 10:01:28.410826] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.404 [2024-11-19 10:01:28.410865] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:14.404 [2024-11-19 10:01:28.410901] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:14.404 true 00:07:14.404 10:01:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:14.404 [2024-11-19 10:01:28.423066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60409 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60409 ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60409 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60409 00:07:14.404 killing process with pid 60409 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.404 10:01:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60409' 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60409 00:07:14.404 [2024-11-19 10:01:28.504844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.404 10:01:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60409 00:07:14.404 [2024-11-19 10:01:28.504944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.404 [2024-11-19 10:01:28.505657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.404 [2024-11-19 10:01:28.505686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:14.404 [2024-11-19 10:01:28.522952] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.783 10:01:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:15.783 00:07:15.783 real 0m2.383s 00:07:15.783 user 0m2.576s 00:07:15.783 sys 0m0.449s 00:07:15.783 10:01:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.783 10:01:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.783 ************************************ 00:07:15.783 END TEST raid1_resize_test 00:07:15.783 ************************************ 00:07:15.783 10:01:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:15.783 10:01:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:15.783 10:01:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:15.783 10:01:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:15.783 10:01:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.783 10:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.783 ************************************ 00:07:15.783 START TEST raid_state_function_test 00:07:15.783 ************************************ 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:15.783 Process raid pid: 60472 00:07:15.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60472 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60472' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60472 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60472 ']' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.783 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.784 10:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.784 [2024-11-19 10:01:29.794026] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:15.784 [2024-11-19 10:01:29.794576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.784 [2024-11-19 10:01:29.980231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.043 [2024-11-19 10:01:30.125692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.303 [2024-11-19 10:01:30.345035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.303 [2024-11-19 10:01:30.345090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 [2024-11-19 10:01:30.720507] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.564 [2024-11-19 10:01:30.720574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.564 [2024-11-19 10:01:30.720592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.564 [2024-11-19 10:01:30.720636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 10:01:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.564 "name": "Existed_Raid", 00:07:16.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.564 "strip_size_kb": 64, 00:07:16.564 "state": "configuring", 00:07:16.564 
"raid_level": "raid0", 00:07:16.564 "superblock": false, 00:07:16.564 "num_base_bdevs": 2, 00:07:16.564 "num_base_bdevs_discovered": 0, 00:07:16.564 "num_base_bdevs_operational": 2, 00:07:16.564 "base_bdevs_list": [ 00:07:16.564 { 00:07:16.564 "name": "BaseBdev1", 00:07:16.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.564 "is_configured": false, 00:07:16.564 "data_offset": 0, 00:07:16.564 "data_size": 0 00:07:16.564 }, 00:07:16.564 { 00:07:16.564 "name": "BaseBdev2", 00:07:16.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.564 "is_configured": false, 00:07:16.564 "data_offset": 0, 00:07:16.564 "data_size": 0 00:07:16.564 } 00:07:16.564 ] 00:07:16.564 }' 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.564 10:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 [2024-11-19 10:01:31.240648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.136 [2024-11-19 10:01:31.240692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:17.136 [2024-11-19 10:01:31.248611] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.136 [2024-11-19 10:01:31.248707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.136 [2024-11-19 10:01:31.248720] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.136 [2024-11-19 10:01:31.248738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 [2024-11-19 10:01:31.292939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.136 BaseBdev1 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.136 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 [ 00:07:17.136 { 00:07:17.136 "name": "BaseBdev1", 00:07:17.136 "aliases": [ 00:07:17.136 "0b2f39b6-9313-40d9-a114-cb86a91da837" 00:07:17.136 ], 00:07:17.136 "product_name": "Malloc disk", 00:07:17.136 "block_size": 512, 00:07:17.136 "num_blocks": 65536, 00:07:17.136 "uuid": "0b2f39b6-9313-40d9-a114-cb86a91da837", 00:07:17.136 "assigned_rate_limits": { 00:07:17.136 "rw_ios_per_sec": 0, 00:07:17.136 "rw_mbytes_per_sec": 0, 00:07:17.136 "r_mbytes_per_sec": 0, 00:07:17.136 "w_mbytes_per_sec": 0 00:07:17.136 }, 00:07:17.136 "claimed": true, 00:07:17.136 "claim_type": "exclusive_write", 00:07:17.136 "zoned": false, 00:07:17.136 "supported_io_types": { 00:07:17.136 "read": true, 00:07:17.136 "write": true, 00:07:17.136 "unmap": true, 00:07:17.136 "flush": true, 00:07:17.136 "reset": true, 00:07:17.136 "nvme_admin": false, 00:07:17.136 "nvme_io": false, 00:07:17.136 "nvme_io_md": false, 00:07:17.136 "write_zeroes": true, 00:07:17.136 "zcopy": true, 00:07:17.136 "get_zone_info": false, 00:07:17.136 "zone_management": false, 00:07:17.136 "zone_append": false, 00:07:17.137 "compare": false, 00:07:17.137 "compare_and_write": false, 00:07:17.137 "abort": true, 00:07:17.137 "seek_hole": false, 00:07:17.137 "seek_data": false, 00:07:17.137 "copy": true, 00:07:17.137 "nvme_iov_md": 
false 00:07:17.137 }, 00:07:17.137 "memory_domains": [ 00:07:17.137 { 00:07:17.137 "dma_device_id": "system", 00:07:17.137 "dma_device_type": 1 00:07:17.137 }, 00:07:17.137 { 00:07:17.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.137 "dma_device_type": 2 00:07:17.137 } 00:07:17.137 ], 00:07:17.137 "driver_specific": {} 00:07:17.137 } 00:07:17.137 ] 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.137 
10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.137 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.395 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.396 "name": "Existed_Raid", 00:07:17.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.396 "strip_size_kb": 64, 00:07:17.396 "state": "configuring", 00:07:17.396 "raid_level": "raid0", 00:07:17.396 "superblock": false, 00:07:17.396 "num_base_bdevs": 2, 00:07:17.396 "num_base_bdevs_discovered": 1, 00:07:17.396 "num_base_bdevs_operational": 2, 00:07:17.396 "base_bdevs_list": [ 00:07:17.396 { 00:07:17.396 "name": "BaseBdev1", 00:07:17.396 "uuid": "0b2f39b6-9313-40d9-a114-cb86a91da837", 00:07:17.396 "is_configured": true, 00:07:17.396 "data_offset": 0, 00:07:17.396 "data_size": 65536 00:07:17.396 }, 00:07:17.396 { 00:07:17.396 "name": "BaseBdev2", 00:07:17.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.396 "is_configured": false, 00:07:17.396 "data_offset": 0, 00:07:17.396 "data_size": 0 00:07:17.396 } 00:07:17.396 ] 00:07:17.396 }' 00:07:17.396 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.396 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 [2024-11-19 10:01:31.825184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.654 [2024-11-19 10:01:31.825248] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 [2024-11-19 10:01:31.833221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.654 [2024-11-19 10:01:31.835753] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.654 [2024-11-19 10:01:31.836009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.913 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.913 "name": "Existed_Raid", 00:07:17.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.913 "strip_size_kb": 64, 00:07:17.913 "state": "configuring", 00:07:17.913 "raid_level": "raid0", 00:07:17.913 "superblock": false, 00:07:17.913 "num_base_bdevs": 2, 00:07:17.913 "num_base_bdevs_discovered": 1, 00:07:17.913 "num_base_bdevs_operational": 2, 00:07:17.913 "base_bdevs_list": [ 00:07:17.913 { 00:07:17.913 "name": "BaseBdev1", 00:07:17.913 "uuid": "0b2f39b6-9313-40d9-a114-cb86a91da837", 00:07:17.913 "is_configured": true, 00:07:17.913 "data_offset": 0, 00:07:17.913 "data_size": 65536 00:07:17.913 }, 00:07:17.913 { 00:07:17.913 "name": "BaseBdev2", 00:07:17.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.913 "is_configured": false, 00:07:17.913 "data_offset": 0, 00:07:17.913 "data_size": 0 00:07:17.913 } 00:07:17.913 
] 00:07:17.913 }' 00:07:17.913 10:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.913 10:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.173 [2024-11-19 10:01:32.389108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.173 [2024-11-19 10:01:32.389161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.173 [2024-11-19 10:01:32.389174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:18.173 [2024-11-19 10:01:32.389522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.173 [2024-11-19 10:01:32.389717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.173 [2024-11-19 10:01:32.389754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:18.173 [2024-11-19 10:01:32.390085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.173 BaseBdev2 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.173 10:01:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.173 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.432 [ 00:07:18.432 { 00:07:18.432 "name": "BaseBdev2", 00:07:18.432 "aliases": [ 00:07:18.432 "15915a65-4dd6-4952-bd0e-3fd8e4a5b7f1" 00:07:18.432 ], 00:07:18.432 "product_name": "Malloc disk", 00:07:18.432 "block_size": 512, 00:07:18.432 "num_blocks": 65536, 00:07:18.432 "uuid": "15915a65-4dd6-4952-bd0e-3fd8e4a5b7f1", 00:07:18.432 "assigned_rate_limits": { 00:07:18.432 "rw_ios_per_sec": 0, 00:07:18.432 "rw_mbytes_per_sec": 0, 00:07:18.432 "r_mbytes_per_sec": 0, 00:07:18.432 "w_mbytes_per_sec": 0 00:07:18.432 }, 00:07:18.432 "claimed": true, 00:07:18.432 "claim_type": "exclusive_write", 00:07:18.432 "zoned": false, 00:07:18.432 "supported_io_types": { 00:07:18.432 "read": true, 00:07:18.432 "write": true, 00:07:18.432 "unmap": true, 00:07:18.432 "flush": true, 00:07:18.432 "reset": true, 00:07:18.432 "nvme_admin": false, 00:07:18.432 "nvme_io": false, 00:07:18.432 "nvme_io_md": 
false, 00:07:18.432 "write_zeroes": true, 00:07:18.432 "zcopy": true, 00:07:18.432 "get_zone_info": false, 00:07:18.432 "zone_management": false, 00:07:18.432 "zone_append": false, 00:07:18.432 "compare": false, 00:07:18.432 "compare_and_write": false, 00:07:18.432 "abort": true, 00:07:18.432 "seek_hole": false, 00:07:18.432 "seek_data": false, 00:07:18.432 "copy": true, 00:07:18.432 "nvme_iov_md": false 00:07:18.432 }, 00:07:18.432 "memory_domains": [ 00:07:18.432 { 00:07:18.432 "dma_device_id": "system", 00:07:18.432 "dma_device_type": 1 00:07:18.432 }, 00:07:18.432 { 00:07:18.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.432 "dma_device_type": 2 00:07:18.432 } 00:07:18.432 ], 00:07:18.432 "driver_specific": {} 00:07:18.432 } 00:07:18.432 ] 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.432 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.433 "name": "Existed_Raid", 00:07:18.433 "uuid": "14561fc8-5e84-42d8-9d48-6c4b1384be56", 00:07:18.433 "strip_size_kb": 64, 00:07:18.433 "state": "online", 00:07:18.433 "raid_level": "raid0", 00:07:18.433 "superblock": false, 00:07:18.433 "num_base_bdevs": 2, 00:07:18.433 "num_base_bdevs_discovered": 2, 00:07:18.433 "num_base_bdevs_operational": 2, 00:07:18.433 "base_bdevs_list": [ 00:07:18.433 { 00:07:18.433 "name": "BaseBdev1", 00:07:18.433 "uuid": "0b2f39b6-9313-40d9-a114-cb86a91da837", 00:07:18.433 "is_configured": true, 00:07:18.433 "data_offset": 0, 00:07:18.433 "data_size": 65536 00:07:18.433 }, 00:07:18.433 { 00:07:18.433 "name": "BaseBdev2", 00:07:18.433 "uuid": "15915a65-4dd6-4952-bd0e-3fd8e4a5b7f1", 00:07:18.433 "is_configured": true, 00:07:18.433 "data_offset": 0, 00:07:18.433 "data_size": 65536 00:07:18.433 } 00:07:18.433 ] 00:07:18.433 }' 00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:18.433 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.001 [2024-11-19 10:01:32.949821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.001 "name": "Existed_Raid", 00:07:19.001 "aliases": [ 00:07:19.001 "14561fc8-5e84-42d8-9d48-6c4b1384be56" 00:07:19.001 ], 00:07:19.001 "product_name": "Raid Volume", 00:07:19.001 "block_size": 512, 00:07:19.001 "num_blocks": 131072, 00:07:19.001 "uuid": "14561fc8-5e84-42d8-9d48-6c4b1384be56", 00:07:19.001 "assigned_rate_limits": { 00:07:19.001 "rw_ios_per_sec": 0, 00:07:19.001 "rw_mbytes_per_sec": 0, 00:07:19.001 "r_mbytes_per_sec": 
0, 00:07:19.001 "w_mbytes_per_sec": 0 00:07:19.001 }, 00:07:19.001 "claimed": false, 00:07:19.001 "zoned": false, 00:07:19.001 "supported_io_types": { 00:07:19.001 "read": true, 00:07:19.001 "write": true, 00:07:19.001 "unmap": true, 00:07:19.001 "flush": true, 00:07:19.001 "reset": true, 00:07:19.001 "nvme_admin": false, 00:07:19.001 "nvme_io": false, 00:07:19.001 "nvme_io_md": false, 00:07:19.001 "write_zeroes": true, 00:07:19.001 "zcopy": false, 00:07:19.001 "get_zone_info": false, 00:07:19.001 "zone_management": false, 00:07:19.001 "zone_append": false, 00:07:19.001 "compare": false, 00:07:19.001 "compare_and_write": false, 00:07:19.001 "abort": false, 00:07:19.001 "seek_hole": false, 00:07:19.001 "seek_data": false, 00:07:19.001 "copy": false, 00:07:19.001 "nvme_iov_md": false 00:07:19.001 }, 00:07:19.001 "memory_domains": [ 00:07:19.001 { 00:07:19.001 "dma_device_id": "system", 00:07:19.001 "dma_device_type": 1 00:07:19.001 }, 00:07:19.001 { 00:07:19.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.001 "dma_device_type": 2 00:07:19.001 }, 00:07:19.001 { 00:07:19.001 "dma_device_id": "system", 00:07:19.001 "dma_device_type": 1 00:07:19.001 }, 00:07:19.001 { 00:07:19.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.001 "dma_device_type": 2 00:07:19.001 } 00:07:19.001 ], 00:07:19.001 "driver_specific": { 00:07:19.001 "raid": { 00:07:19.001 "uuid": "14561fc8-5e84-42d8-9d48-6c4b1384be56", 00:07:19.001 "strip_size_kb": 64, 00:07:19.001 "state": "online", 00:07:19.001 "raid_level": "raid0", 00:07:19.001 "superblock": false, 00:07:19.001 "num_base_bdevs": 2, 00:07:19.001 "num_base_bdevs_discovered": 2, 00:07:19.001 "num_base_bdevs_operational": 2, 00:07:19.001 "base_bdevs_list": [ 00:07:19.001 { 00:07:19.001 "name": "BaseBdev1", 00:07:19.001 "uuid": "0b2f39b6-9313-40d9-a114-cb86a91da837", 00:07:19.001 "is_configured": true, 00:07:19.001 "data_offset": 0, 00:07:19.001 "data_size": 65536 00:07:19.001 }, 00:07:19.001 { 00:07:19.001 "name": "BaseBdev2", 
00:07:19.001 "uuid": "15915a65-4dd6-4952-bd0e-3fd8e4a5b7f1", 00:07:19.001 "is_configured": true, 00:07:19.001 "data_offset": 0, 00:07:19.001 "data_size": 65536 00:07:19.001 } 00:07:19.001 ] 00:07:19.001 } 00:07:19.001 } 00:07:19.001 }' 00:07:19.001 10:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:19.001 BaseBdev2' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.001 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.001 [2024-11-19 10:01:33.205533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.001 [2024-11-19 10:01:33.205766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.001 [2024-11-19 10:01:33.205895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.260 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.260 "name": "Existed_Raid", 00:07:19.260 "uuid": "14561fc8-5e84-42d8-9d48-6c4b1384be56", 00:07:19.260 "strip_size_kb": 64, 00:07:19.260 
"state": "offline", 00:07:19.260 "raid_level": "raid0", 00:07:19.260 "superblock": false, 00:07:19.261 "num_base_bdevs": 2, 00:07:19.261 "num_base_bdevs_discovered": 1, 00:07:19.261 "num_base_bdevs_operational": 1, 00:07:19.261 "base_bdevs_list": [ 00:07:19.261 { 00:07:19.261 "name": null, 00:07:19.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.261 "is_configured": false, 00:07:19.261 "data_offset": 0, 00:07:19.261 "data_size": 65536 00:07:19.261 }, 00:07:19.261 { 00:07:19.261 "name": "BaseBdev2", 00:07:19.261 "uuid": "15915a65-4dd6-4952-bd0e-3fd8e4a5b7f1", 00:07:19.261 "is_configured": true, 00:07:19.261 "data_offset": 0, 00:07:19.261 "data_size": 65536 00:07:19.261 } 00:07:19.261 ] 00:07:19.261 }' 00:07:19.261 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.261 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.828 [2024-11-19 10:01:33.884398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.828 [2024-11-19 10:01:33.884606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.828 10:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60472 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60472 ']' 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60472 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.828 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60472 00:07:20.087 killing process with pid 60472 00:07:20.087 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.087 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.087 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60472' 00:07:20.087 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60472 00:07:20.087 [2024-11-19 10:01:34.065238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.087 10:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60472 00:07:20.087 [2024-11-19 10:01:34.080765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.022 00:07:21.022 real 0m5.470s 00:07:21.022 user 0m8.185s 00:07:21.022 sys 0m0.838s 00:07:21.022 ************************************ 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.022 END TEST raid_state_function_test 00:07:21.022 ************************************ 00:07:21.022 10:01:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:21.022 10:01:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:21.022 10:01:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.022 10:01:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.022 ************************************ 00:07:21.022 START TEST raid_state_function_test_sb 00:07:21.022 ************************************ 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.022 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60725 00:07:21.023 Process raid pid: 60725 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60725' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60725 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60725 ']' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.023 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.023 10:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.282 [2024-11-19 10:01:35.304400] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:21.282 [2024-11-19 10:01:35.304568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.282 [2024-11-19 10:01:35.484740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.541 [2024-11-19 10:01:35.631549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.800 [2024-11-19 10:01:35.856199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.800 [2024-11-19 10:01:35.856527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.058 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.058 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:22.058 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.058 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.058 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.058 [2024-11-19 10:01:36.284841] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:22.058 [2024-11-19 10:01:36.284931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.058 [2024-11-19 10:01:36.284950] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.058 [2024-11-19 10:01:36.284968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.318 "name": "Existed_Raid", 00:07:22.318 "uuid": "9feb539d-30d1-4c97-837a-3d4fb79de714", 00:07:22.318 "strip_size_kb": 64, 00:07:22.318 "state": "configuring", 00:07:22.318 "raid_level": "raid0", 00:07:22.318 "superblock": true, 00:07:22.318 "num_base_bdevs": 2, 00:07:22.318 "num_base_bdevs_discovered": 0, 00:07:22.318 "num_base_bdevs_operational": 2, 00:07:22.318 "base_bdevs_list": [ 00:07:22.318 { 00:07:22.318 "name": "BaseBdev1", 00:07:22.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.318 "is_configured": false, 00:07:22.318 "data_offset": 0, 00:07:22.318 "data_size": 0 00:07:22.318 }, 00:07:22.318 { 00:07:22.318 "name": "BaseBdev2", 00:07:22.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.318 "is_configured": false, 00:07:22.318 "data_offset": 0, 00:07:22.318 "data_size": 0 00:07:22.318 } 00:07:22.318 ] 00:07:22.318 }' 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.318 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.577 [2024-11-19 10:01:36.797033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:22.577 [2024-11-19 10:01:36.797075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.577 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.577 [2024-11-19 10:01:36.805027] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.577 [2024-11-19 10:01:36.805092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.577 [2024-11-19 10:01:36.805108] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.577 [2024-11-19 10:01:36.805149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.836 [2024-11-19 10:01:36.850374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.836 BaseBdev1 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.836 [ 00:07:22.836 { 00:07:22.836 "name": "BaseBdev1", 00:07:22.836 "aliases": [ 00:07:22.836 "b1283e28-2a44-4cae-85cb-31f822e2c81a" 00:07:22.836 ], 00:07:22.836 "product_name": "Malloc disk", 00:07:22.836 "block_size": 512, 00:07:22.836 "num_blocks": 65536, 00:07:22.836 "uuid": "b1283e28-2a44-4cae-85cb-31f822e2c81a", 00:07:22.836 "assigned_rate_limits": { 00:07:22.836 "rw_ios_per_sec": 0, 00:07:22.836 "rw_mbytes_per_sec": 0, 00:07:22.836 "r_mbytes_per_sec": 0, 00:07:22.836 "w_mbytes_per_sec": 0 00:07:22.836 }, 00:07:22.836 "claimed": true, 
00:07:22.836 "claim_type": "exclusive_write", 00:07:22.836 "zoned": false, 00:07:22.836 "supported_io_types": { 00:07:22.836 "read": true, 00:07:22.836 "write": true, 00:07:22.836 "unmap": true, 00:07:22.836 "flush": true, 00:07:22.836 "reset": true, 00:07:22.836 "nvme_admin": false, 00:07:22.836 "nvme_io": false, 00:07:22.836 "nvme_io_md": false, 00:07:22.836 "write_zeroes": true, 00:07:22.836 "zcopy": true, 00:07:22.836 "get_zone_info": false, 00:07:22.836 "zone_management": false, 00:07:22.836 "zone_append": false, 00:07:22.836 "compare": false, 00:07:22.836 "compare_and_write": false, 00:07:22.836 "abort": true, 00:07:22.836 "seek_hole": false, 00:07:22.836 "seek_data": false, 00:07:22.836 "copy": true, 00:07:22.836 "nvme_iov_md": false 00:07:22.836 }, 00:07:22.836 "memory_domains": [ 00:07:22.836 { 00:07:22.836 "dma_device_id": "system", 00:07:22.836 "dma_device_type": 1 00:07:22.836 }, 00:07:22.836 { 00:07:22.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.836 "dma_device_type": 2 00:07:22.836 } 00:07:22.836 ], 00:07:22.836 "driver_specific": {} 00:07:22.836 } 00:07:22.836 ] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.836 10:01:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.836 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.837 "name": "Existed_Raid", 00:07:22.837 "uuid": "955592c9-1d5d-47d8-b942-72911dd460d7", 00:07:22.837 "strip_size_kb": 64, 00:07:22.837 "state": "configuring", 00:07:22.837 "raid_level": "raid0", 00:07:22.837 "superblock": true, 00:07:22.837 "num_base_bdevs": 2, 00:07:22.837 "num_base_bdevs_discovered": 1, 00:07:22.837 "num_base_bdevs_operational": 2, 00:07:22.837 "base_bdevs_list": [ 00:07:22.837 { 00:07:22.837 "name": "BaseBdev1", 00:07:22.837 "uuid": "b1283e28-2a44-4cae-85cb-31f822e2c81a", 00:07:22.837 "is_configured": true, 00:07:22.837 "data_offset": 2048, 00:07:22.837 "data_size": 63488 00:07:22.837 }, 00:07:22.837 { 00:07:22.837 "name": "BaseBdev2", 00:07:22.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.837 
"is_configured": false, 00:07:22.837 "data_offset": 0, 00:07:22.837 "data_size": 0 00:07:22.837 } 00:07:22.837 ] 00:07:22.837 }' 00:07:22.837 10:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.837 10:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 [2024-11-19 10:01:37.390640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.405 [2024-11-19 10:01:37.390759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 [2024-11-19 10:01:37.398659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.405 [2024-11-19 10:01:37.401470] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.405 [2024-11-19 10:01:37.401666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.405 10:01:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.405 10:01:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.405 "name": "Existed_Raid", 00:07:23.405 "uuid": "c6920c22-5a3e-4d36-955f-f0a822f44669", 00:07:23.405 "strip_size_kb": 64, 00:07:23.405 "state": "configuring", 00:07:23.405 "raid_level": "raid0", 00:07:23.405 "superblock": true, 00:07:23.405 "num_base_bdevs": 2, 00:07:23.405 "num_base_bdevs_discovered": 1, 00:07:23.405 "num_base_bdevs_operational": 2, 00:07:23.405 "base_bdevs_list": [ 00:07:23.405 { 00:07:23.405 "name": "BaseBdev1", 00:07:23.405 "uuid": "b1283e28-2a44-4cae-85cb-31f822e2c81a", 00:07:23.405 "is_configured": true, 00:07:23.405 "data_offset": 2048, 00:07:23.405 "data_size": 63488 00:07:23.405 }, 00:07:23.405 { 00:07:23.405 "name": "BaseBdev2", 00:07:23.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.405 "is_configured": false, 00:07:23.405 "data_offset": 0, 00:07:23.405 "data_size": 0 00:07:23.405 } 00:07:23.405 ] 00:07:23.405 }' 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.405 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.973 [2024-11-19 10:01:37.977305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.973 [2024-11-19 10:01:37.977935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.973 [2024-11-19 10:01:37.977962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.973 BaseBdev2 00:07:23.973 [2024-11-19 10:01:37.978393] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.973 [2024-11-19 10:01:37.978606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.973 [2024-11-19 10:01:37.978635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:23.973 [2024-11-19 10:01:37.978833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:23.973 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.974 10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.974 
10:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.974 [ 00:07:23.974 { 00:07:23.974 "name": "BaseBdev2", 00:07:23.974 "aliases": [ 00:07:23.974 "bbba6d19-0692-433b-baf8-c88316ebdcf7" 00:07:23.974 ], 00:07:23.974 "product_name": "Malloc disk", 00:07:23.974 "block_size": 512, 00:07:23.974 "num_blocks": 65536, 00:07:23.974 "uuid": "bbba6d19-0692-433b-baf8-c88316ebdcf7", 00:07:23.974 "assigned_rate_limits": { 00:07:23.974 "rw_ios_per_sec": 0, 00:07:23.974 "rw_mbytes_per_sec": 0, 00:07:23.974 "r_mbytes_per_sec": 0, 00:07:23.974 "w_mbytes_per_sec": 0 00:07:23.974 }, 00:07:23.974 "claimed": true, 00:07:23.974 "claim_type": "exclusive_write", 00:07:23.974 "zoned": false, 00:07:23.974 "supported_io_types": { 00:07:23.974 "read": true, 00:07:23.974 "write": true, 00:07:23.974 "unmap": true, 00:07:23.974 "flush": true, 00:07:23.974 "reset": true, 00:07:23.974 "nvme_admin": false, 00:07:23.974 "nvme_io": false, 00:07:23.974 "nvme_io_md": false, 00:07:23.974 "write_zeroes": true, 00:07:23.974 "zcopy": true, 00:07:23.974 "get_zone_info": false, 00:07:23.974 "zone_management": false, 00:07:23.974 "zone_append": false, 00:07:23.974 "compare": false, 00:07:23.974 "compare_and_write": false, 00:07:23.974 "abort": true, 00:07:23.974 "seek_hole": false, 00:07:23.974 "seek_data": false, 00:07:23.974 "copy": true, 00:07:23.974 "nvme_iov_md": false 00:07:23.974 }, 00:07:23.974 "memory_domains": [ 00:07:23.974 { 00:07:23.974 "dma_device_id": "system", 00:07:23.974 "dma_device_type": 1 00:07:23.974 }, 00:07:23.974 { 00:07:23.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.974 "dma_device_type": 2 00:07:23.974 } 00:07:23.974 ], 00:07:23.974 "driver_specific": {} 00:07:23.974 } 00:07:23.974 ] 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:23.974 10:01:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.974 10:01:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.974 "name": "Existed_Raid", 00:07:23.974 "uuid": "c6920c22-5a3e-4d36-955f-f0a822f44669", 00:07:23.974 "strip_size_kb": 64, 00:07:23.974 "state": "online", 00:07:23.974 "raid_level": "raid0", 00:07:23.974 "superblock": true, 00:07:23.974 "num_base_bdevs": 2, 00:07:23.974 "num_base_bdevs_discovered": 2, 00:07:23.974 "num_base_bdevs_operational": 2, 00:07:23.974 "base_bdevs_list": [ 00:07:23.974 { 00:07:23.974 "name": "BaseBdev1", 00:07:23.974 "uuid": "b1283e28-2a44-4cae-85cb-31f822e2c81a", 00:07:23.974 "is_configured": true, 00:07:23.974 "data_offset": 2048, 00:07:23.974 "data_size": 63488 00:07:23.974 }, 00:07:23.974 { 00:07:23.974 "name": "BaseBdev2", 00:07:23.974 "uuid": "bbba6d19-0692-433b-baf8-c88316ebdcf7", 00:07:23.974 "is_configured": true, 00:07:23.974 "data_offset": 2048, 00:07:23.974 "data_size": 63488 00:07:23.974 } 00:07:23.974 ] 00:07:23.974 }' 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.974 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.544 [2024-11-19 10:01:38.549857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.544 "name": "Existed_Raid", 00:07:24.544 "aliases": [ 00:07:24.544 "c6920c22-5a3e-4d36-955f-f0a822f44669" 00:07:24.544 ], 00:07:24.544 "product_name": "Raid Volume", 00:07:24.544 "block_size": 512, 00:07:24.544 "num_blocks": 126976, 00:07:24.544 "uuid": "c6920c22-5a3e-4d36-955f-f0a822f44669", 00:07:24.544 "assigned_rate_limits": { 00:07:24.544 "rw_ios_per_sec": 0, 00:07:24.544 "rw_mbytes_per_sec": 0, 00:07:24.544 "r_mbytes_per_sec": 0, 00:07:24.544 "w_mbytes_per_sec": 0 00:07:24.544 }, 00:07:24.544 "claimed": false, 00:07:24.544 "zoned": false, 00:07:24.544 "supported_io_types": { 00:07:24.544 "read": true, 00:07:24.544 "write": true, 00:07:24.544 "unmap": true, 00:07:24.544 "flush": true, 00:07:24.544 "reset": true, 00:07:24.544 "nvme_admin": false, 00:07:24.544 "nvme_io": false, 00:07:24.544 "nvme_io_md": false, 00:07:24.544 "write_zeroes": true, 00:07:24.544 "zcopy": false, 00:07:24.544 "get_zone_info": false, 00:07:24.544 "zone_management": false, 00:07:24.544 "zone_append": false, 00:07:24.544 "compare": false, 00:07:24.544 "compare_and_write": false, 00:07:24.544 "abort": false, 00:07:24.544 "seek_hole": false, 00:07:24.544 "seek_data": false, 00:07:24.544 "copy": false, 00:07:24.544 "nvme_iov_md": false 00:07:24.544 }, 00:07:24.544 "memory_domains": [ 00:07:24.544 { 00:07:24.544 
"dma_device_id": "system", 00:07:24.544 "dma_device_type": 1 00:07:24.544 }, 00:07:24.544 { 00:07:24.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.544 "dma_device_type": 2 00:07:24.544 }, 00:07:24.544 { 00:07:24.544 "dma_device_id": "system", 00:07:24.544 "dma_device_type": 1 00:07:24.544 }, 00:07:24.544 { 00:07:24.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.544 "dma_device_type": 2 00:07:24.544 } 00:07:24.544 ], 00:07:24.544 "driver_specific": { 00:07:24.544 "raid": { 00:07:24.544 "uuid": "c6920c22-5a3e-4d36-955f-f0a822f44669", 00:07:24.544 "strip_size_kb": 64, 00:07:24.544 "state": "online", 00:07:24.544 "raid_level": "raid0", 00:07:24.544 "superblock": true, 00:07:24.544 "num_base_bdevs": 2, 00:07:24.544 "num_base_bdevs_discovered": 2, 00:07:24.544 "num_base_bdevs_operational": 2, 00:07:24.544 "base_bdevs_list": [ 00:07:24.544 { 00:07:24.544 "name": "BaseBdev1", 00:07:24.544 "uuid": "b1283e28-2a44-4cae-85cb-31f822e2c81a", 00:07:24.544 "is_configured": true, 00:07:24.544 "data_offset": 2048, 00:07:24.544 "data_size": 63488 00:07:24.544 }, 00:07:24.544 { 00:07:24.544 "name": "BaseBdev2", 00:07:24.544 "uuid": "bbba6d19-0692-433b-baf8-c88316ebdcf7", 00:07:24.544 "is_configured": true, 00:07:24.544 "data_offset": 2048, 00:07:24.544 "data_size": 63488 00:07:24.544 } 00:07:24.544 ] 00:07:24.544 } 00:07:24.544 } 00:07:24.544 }' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:24.544 BaseBdev2' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.544 10:01:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.544 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.803 [2024-11-19 10:01:38.813575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.803 [2024-11-19 10:01:38.813652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.803 [2024-11-19 10:01:38.813719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.803 "name": "Existed_Raid", 00:07:24.803 "uuid": "c6920c22-5a3e-4d36-955f-f0a822f44669", 00:07:24.803 "strip_size_kb": 64, 00:07:24.803 "state": "offline", 00:07:24.803 "raid_level": "raid0", 00:07:24.803 "superblock": true, 00:07:24.803 "num_base_bdevs": 2, 00:07:24.803 "num_base_bdevs_discovered": 1, 00:07:24.803 "num_base_bdevs_operational": 1, 00:07:24.803 "base_bdevs_list": [ 00:07:24.803 { 00:07:24.803 "name": null, 00:07:24.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.803 "is_configured": false, 00:07:24.803 "data_offset": 0, 00:07:24.803 "data_size": 63488 00:07:24.803 }, 00:07:24.803 { 00:07:24.803 "name": "BaseBdev2", 00:07:24.803 "uuid": "bbba6d19-0692-433b-baf8-c88316ebdcf7", 00:07:24.803 "is_configured": true, 00:07:24.803 "data_offset": 2048, 00:07:24.803 "data_size": 63488 00:07:24.803 } 00:07:24.803 ] 
00:07:24.803 }' 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.803 10:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.370 [2024-11-19 10:01:39.471907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.370 [2024-11-19 10:01:39.471979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.370 10:01:39 
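After `bdev_malloc_delete BaseBdev1`, `verify_raid_bdev_state` checks that the raid bdev has transitioned to the expected state with one operational base bdev. An approximate Python sketch of those field comparisons, using the values visible in the JSON above (the real shell function compares additional fields such as the discovered base bdev list; this only illustrates the core checks):

```python
import json

# Subset of the Existed_Raid JSON from the trace after BaseBdev1 was removed.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Rough analogue of the comparisons in bdev_raid.sh's
    # verify_raid_bdev_state; raises AssertionError on mismatch.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# raid0 has no redundancy, so losing one base bdev takes the array offline.
ok = verify_raid_bdev_state(raid_bdev_info, "offline", "raid0", 64, 1)
```

This mirrors why the trace sets `expected_state=offline`: `has_redundancy raid0` returns 1, so a single base bdev removal cannot leave the array online.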
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.370 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.371 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.371 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.371 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.371 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60725 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60725 ']' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60725 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60725 00:07:25.629 killing process with pid 60725 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60725' 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60725 00:07:25.629 [2024-11-19 10:01:39.649369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.629 10:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60725 00:07:25.629 [2024-11-19 10:01:39.664237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.566 ************************************ 00:07:26.566 END TEST raid_state_function_test_sb 00:07:26.566 ************************************ 00:07:26.566 10:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:26.566 00:07:26.566 real 0m5.526s 00:07:26.566 user 0m8.324s 00:07:26.566 sys 0m0.799s 00:07:26.566 10:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.566 10:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.566 10:01:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:26.566 10:01:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:26.566 10:01:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.566 10:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.566 ************************************ 00:07:26.566 START TEST raid_superblock_test 00:07:26.566 ************************************ 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60977 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60977 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60977 ']' 00:07:26.566 
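The `raid_superblock_test` setup above picks a strip size only for levels other than raid1 (`'[' raid0 '!=' raid1 ']'` followed by `strip_size=64`). A hedged shell sketch of that branch, with the variable names taken from the trace:

```shell
#!/bin/sh
# Sketch of the strip-size selection seen in raid_superblock_test:
# raid1 mirrors data and takes no strip size; other levels get -z 64.
raid_level=raid0

if [ "$raid_level" != "raid1" ]; then
    strip_size=64
    strip_size_create_arg="-z 64"
else
    strip_size=""
    strip_size_create_arg=""
fi

# The create arg is later passed to: rpc_cmd bdev_raid_create $strip_size_create_arg ...
echo "$strip_size_create_arg"
```

For `raid_level=raid0` this produces `-z 64`, matching the `bdev_raid_create -z 64 -r raid0` invocation seen later in the trace.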
10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.566 10:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.826 [2024-11-19 10:01:40.891252] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:26.826 [2024-11-19 10:01:40.891456] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ] 00:07:27.086 [2024-11-19 10:01:41.078820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.086 [2024-11-19 10:01:41.229925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.373 [2024-11-19 10:01:41.464134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.373 [2024-11-19 10:01:41.464430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 malloc1 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 [2024-11-19 10:01:41.985469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.939 [2024-11-19 10:01:41.985716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.939 [2024-11-19 10:01:41.985766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:27.939 [2024-11-19 10:01:41.985804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:27.939 [2024-11-19 10:01:41.988974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.939 [2024-11-19 10:01:41.989020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.939 pt1 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.939 10:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 malloc2 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 [2024-11-19 10:01:42.046996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.939 [2024-11-19 10:01:42.047116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.939 [2024-11-19 10:01:42.047161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:27.939 [2024-11-19 10:01:42.047175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.939 [2024-11-19 10:01:42.050153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.939 [2024-11-19 10:01:42.050205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.939 pt2 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 [2024-11-19 10:01:42.059062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.939 [2024-11-19 10:01:42.061527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.939 [2024-11-19 10:01:42.061706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:27.939 [2024-11-19 10:01:42.061723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:27.939 [2024-11-19 10:01:42.062084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.939 [2024-11-19 10:01:42.062312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:27.939 [2024-11-19 10:01:42.062354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:27.939 [2024-11-19 10:01:42.062520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:27.939 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.940 10:01:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.940 "name": "raid_bdev1", 00:07:27.940 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:27.940 "strip_size_kb": 64, 00:07:27.940 "state": "online", 00:07:27.940 "raid_level": "raid0", 00:07:27.940 "superblock": true, 00:07:27.940 "num_base_bdevs": 2, 00:07:27.940 "num_base_bdevs_discovered": 2, 00:07:27.940 "num_base_bdevs_operational": 2, 00:07:27.940 "base_bdevs_list": [ 00:07:27.940 { 00:07:27.940 "name": "pt1", 00:07:27.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.940 "is_configured": true, 00:07:27.940 "data_offset": 2048, 00:07:27.940 "data_size": 63488 00:07:27.940 }, 00:07:27.940 { 00:07:27.940 "name": "pt2", 00:07:27.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.940 "is_configured": true, 00:07:27.940 "data_offset": 2048, 00:07:27.940 "data_size": 63488 00:07:27.940 } 00:07:27.940 ] 00:07:27.940 }' 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.940 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.505 [2024-11-19 10:01:42.583742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.505 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.505 "name": "raid_bdev1", 00:07:28.505 "aliases": [ 00:07:28.505 "fb39aefb-4267-432e-aec3-205ac6b8fa51" 00:07:28.505 ], 00:07:28.505 "product_name": "Raid Volume", 00:07:28.505 "block_size": 512, 00:07:28.505 "num_blocks": 126976, 00:07:28.505 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:28.505 "assigned_rate_limits": { 00:07:28.505 "rw_ios_per_sec": 0, 00:07:28.505 "rw_mbytes_per_sec": 0, 00:07:28.505 "r_mbytes_per_sec": 0, 00:07:28.505 "w_mbytes_per_sec": 0 00:07:28.505 }, 00:07:28.505 "claimed": false, 00:07:28.505 "zoned": false, 00:07:28.505 "supported_io_types": { 00:07:28.505 "read": true, 00:07:28.505 "write": true, 00:07:28.505 "unmap": true, 00:07:28.505 "flush": true, 00:07:28.505 "reset": true, 00:07:28.505 "nvme_admin": false, 00:07:28.505 "nvme_io": false, 00:07:28.505 "nvme_io_md": false, 00:07:28.505 "write_zeroes": true, 00:07:28.505 "zcopy": false, 00:07:28.505 "get_zone_info": false, 00:07:28.505 "zone_management": false, 00:07:28.505 "zone_append": false, 00:07:28.505 "compare": false, 00:07:28.505 "compare_and_write": false, 00:07:28.505 "abort": false, 00:07:28.505 
"seek_hole": false, 00:07:28.505 "seek_data": false, 00:07:28.505 "copy": false, 00:07:28.505 "nvme_iov_md": false 00:07:28.505 }, 00:07:28.505 "memory_domains": [ 00:07:28.505 { 00:07:28.505 "dma_device_id": "system", 00:07:28.505 "dma_device_type": 1 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.505 "dma_device_type": 2 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "dma_device_id": "system", 00:07:28.505 "dma_device_type": 1 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.505 "dma_device_type": 2 00:07:28.505 } 00:07:28.505 ], 00:07:28.505 "driver_specific": { 00:07:28.505 "raid": { 00:07:28.505 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:28.505 "strip_size_kb": 64, 00:07:28.505 "state": "online", 00:07:28.505 "raid_level": "raid0", 00:07:28.505 "superblock": true, 00:07:28.505 "num_base_bdevs": 2, 00:07:28.505 "num_base_bdevs_discovered": 2, 00:07:28.505 "num_base_bdevs_operational": 2, 00:07:28.505 "base_bdevs_list": [ 00:07:28.505 { 00:07:28.505 "name": "pt1", 00:07:28.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.505 "is_configured": true, 00:07:28.505 "data_offset": 2048, 00:07:28.505 "data_size": 63488 00:07:28.505 }, 00:07:28.505 { 00:07:28.505 "name": "pt2", 00:07:28.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.505 "is_configured": true, 00:07:28.505 "data_offset": 2048, 00:07:28.505 "data_size": 63488 00:07:28.505 } 00:07:28.506 ] 00:07:28.506 } 00:07:28.506 } 00:07:28.506 }' 00:07:28.506 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.506 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.506 pt2' 00:07:28.506 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 [2024-11-19 10:01:42.859844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb39aefb-4267-432e-aec3-205ac6b8fa51 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb39aefb-4267-432e-aec3-205ac6b8fa51 ']' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 [2024-11-19 10:01:42.907407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.764 [2024-11-19 10:01:42.907439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.764 [2024-11-19 10:01:42.907553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.764 [2024-11-19 10:01:42.907651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.764 [2024-11-19 10:01:42.907675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:28.764 10:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.023 [2024-11-19 10:01:43.047481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.023 [2024-11-19 10:01:43.050409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:29.023 [2024-11-19 10:01:43.050492] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:29.023 [2024-11-19 10:01:43.050571] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:29.023 [2024-11-19 10:01:43.050596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.023 [2024-11-19 10:01:43.050612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:29.023 request: 00:07:29.023 { 00:07:29.023 "name": "raid_bdev1", 00:07:29.023 "raid_level": "raid0", 00:07:29.023 "base_bdevs": [ 00:07:29.023 "malloc1", 00:07:29.023 "malloc2" 00:07:29.023 ], 00:07:29.023 "strip_size_kb": 64, 00:07:29.023 "superblock": false, 00:07:29.023 "method": "bdev_raid_create", 00:07:29.023 "req_id": 1 00:07:29.023 } 00:07:29.023 Got JSON-RPC error response 00:07:29.023 response: 00:07:29.023 { 00:07:29.023 "code": -17, 00:07:29.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:29.023 } 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.023 [2024-11-19 10:01:43.115428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.023 [2024-11-19 10:01:43.115639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.023 [2024-11-19 10:01:43.115724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:29.023 [2024-11-19 10:01:43.115884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.023 [2024-11-19 10:01:43.119016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.023 [2024-11-19 10:01:43.119259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.023 [2024-11-19 10:01:43.119446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:29.023 [2024-11-19 10:01:43.119610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.023 pt1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.023 "name": "raid_bdev1", 00:07:29.023 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:29.023 "strip_size_kb": 64, 00:07:29.023 "state": "configuring", 00:07:29.023 "raid_level": "raid0", 00:07:29.023 "superblock": true, 00:07:29.023 "num_base_bdevs": 2, 00:07:29.023 "num_base_bdevs_discovered": 1, 00:07:29.023 "num_base_bdevs_operational": 2, 00:07:29.023 "base_bdevs_list": [ 00:07:29.023 { 00:07:29.023 "name": 
"pt1", 00:07:29.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.023 "is_configured": true, 00:07:29.023 "data_offset": 2048, 00:07:29.023 "data_size": 63488 00:07:29.023 }, 00:07:29.023 { 00:07:29.023 "name": null, 00:07:29.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.023 "is_configured": false, 00:07:29.023 "data_offset": 2048, 00:07:29.023 "data_size": 63488 00:07:29.023 } 00:07:29.023 ] 00:07:29.023 }' 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.023 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.591 [2024-11-19 10:01:43.631662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.591 [2024-11-19 10:01:43.631901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.591 [2024-11-19 10:01:43.631941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:29.591 [2024-11-19 10:01:43.631961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.591 [2024-11-19 10:01:43.632582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.591 [2024-11-19 10:01:43.632657] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:07:29.591 [2024-11-19 10:01:43.632816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:29.591 [2024-11-19 10:01:43.632865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.591 [2024-11-19 10:01:43.633003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.591 [2024-11-19 10:01:43.633030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.591 [2024-11-19 10:01:43.633335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:29.591 [2024-11-19 10:01:43.633506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.591 [2024-11-19 10:01:43.633527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:29.591 [2024-11-19 10:01:43.633681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.591 pt2 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.591 10:01:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.591 "name": "raid_bdev1", 00:07:29.591 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:29.591 "strip_size_kb": 64, 00:07:29.591 "state": "online", 00:07:29.591 "raid_level": "raid0", 00:07:29.591 "superblock": true, 00:07:29.591 "num_base_bdevs": 2, 00:07:29.591 "num_base_bdevs_discovered": 2, 00:07:29.591 "num_base_bdevs_operational": 2, 00:07:29.591 "base_bdevs_list": [ 00:07:29.591 { 00:07:29.591 "name": "pt1", 00:07:29.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.591 "is_configured": true, 00:07:29.591 "data_offset": 2048, 00:07:29.591 "data_size": 63488 00:07:29.591 }, 00:07:29.591 { 00:07:29.591 "name": "pt2", 00:07:29.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.591 "is_configured": true, 00:07:29.591 "data_offset": 2048, 00:07:29.591 "data_size": 63488 00:07:29.591 } 
00:07:29.591 ] 00:07:29.591 }' 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.591 10:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 [2024-11-19 10:01:44.172153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.159 "name": "raid_bdev1", 00:07:30.159 "aliases": [ 00:07:30.159 "fb39aefb-4267-432e-aec3-205ac6b8fa51" 00:07:30.159 ], 00:07:30.159 "product_name": "Raid Volume", 00:07:30.159 "block_size": 512, 00:07:30.159 "num_blocks": 126976, 00:07:30.159 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:30.159 "assigned_rate_limits": { 00:07:30.159 "rw_ios_per_sec": 0, 
00:07:30.159 "rw_mbytes_per_sec": 0, 00:07:30.159 "r_mbytes_per_sec": 0, 00:07:30.159 "w_mbytes_per_sec": 0 00:07:30.159 }, 00:07:30.159 "claimed": false, 00:07:30.159 "zoned": false, 00:07:30.159 "supported_io_types": { 00:07:30.159 "read": true, 00:07:30.159 "write": true, 00:07:30.159 "unmap": true, 00:07:30.159 "flush": true, 00:07:30.159 "reset": true, 00:07:30.159 "nvme_admin": false, 00:07:30.159 "nvme_io": false, 00:07:30.159 "nvme_io_md": false, 00:07:30.159 "write_zeroes": true, 00:07:30.159 "zcopy": false, 00:07:30.159 "get_zone_info": false, 00:07:30.159 "zone_management": false, 00:07:30.159 "zone_append": false, 00:07:30.159 "compare": false, 00:07:30.159 "compare_and_write": false, 00:07:30.159 "abort": false, 00:07:30.159 "seek_hole": false, 00:07:30.159 "seek_data": false, 00:07:30.159 "copy": false, 00:07:30.159 "nvme_iov_md": false 00:07:30.159 }, 00:07:30.159 "memory_domains": [ 00:07:30.159 { 00:07:30.159 "dma_device_id": "system", 00:07:30.159 "dma_device_type": 1 00:07:30.159 }, 00:07:30.159 { 00:07:30.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.159 "dma_device_type": 2 00:07:30.159 }, 00:07:30.159 { 00:07:30.159 "dma_device_id": "system", 00:07:30.159 "dma_device_type": 1 00:07:30.159 }, 00:07:30.159 { 00:07:30.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.159 "dma_device_type": 2 00:07:30.159 } 00:07:30.159 ], 00:07:30.159 "driver_specific": { 00:07:30.159 "raid": { 00:07:30.159 "uuid": "fb39aefb-4267-432e-aec3-205ac6b8fa51", 00:07:30.159 "strip_size_kb": 64, 00:07:30.159 "state": "online", 00:07:30.159 "raid_level": "raid0", 00:07:30.159 "superblock": true, 00:07:30.159 "num_base_bdevs": 2, 00:07:30.159 "num_base_bdevs_discovered": 2, 00:07:30.159 "num_base_bdevs_operational": 2, 00:07:30.159 "base_bdevs_list": [ 00:07:30.159 { 00:07:30.159 "name": "pt1", 00:07:30.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.159 "is_configured": true, 00:07:30.159 "data_offset": 2048, 00:07:30.159 "data_size": 63488 
00:07:30.159 }, 00:07:30.159 { 00:07:30.159 "name": "pt2", 00:07:30.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.159 "is_configured": true, 00:07:30.159 "data_offset": 2048, 00:07:30.159 "data_size": 63488 00:07:30.159 } 00:07:30.159 ] 00:07:30.159 } 00:07:30.159 } 00:07:30.159 }' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:30.159 pt2' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:30.159 10:01:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:30.418 [2024-11-19 10:01:44.428175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fb39aefb-4267-432e-aec3-205ac6b8fa51 '!=' fb39aefb-4267-432e-aec3-205ac6b8fa51 ']' 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60977 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60977 ']' 00:07:30.418 10:01:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60977 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60977 00:07:30.418 killing process with pid 60977 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60977' 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60977 00:07:30.418 [2024-11-19 10:01:44.515346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.418 10:01:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60977 00:07:30.418 [2024-11-19 10:01:44.515440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.418 [2024-11-19 10:01:44.515498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.418 [2024-11-19 10:01:44.515515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.677 [2024-11-19 10:01:44.678102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.613 10:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:31.613 00:07:31.613 real 0m4.906s 00:07:31.613 user 0m7.230s 00:07:31.613 sys 0m0.785s 00:07:31.613 10:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.613 10:01:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.613 ************************************ 00:07:31.613 END TEST raid_superblock_test 00:07:31.613 ************************************ 00:07:31.614 10:01:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:31.614 10:01:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.614 10:01:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.614 10:01:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.614 ************************************ 00:07:31.614 START TEST raid_read_error_test 00:07:31.614 ************************************ 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.614 10:01:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sZacmYqijt 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61194 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61194 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61194 ']' 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 
00:07:31.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.614 10:01:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.873 [2024-11-19 10:01:45.867081] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:31.874 [2024-11-19 10:01:45.867572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61194 ] 00:07:31.874 [2024-11-19 10:01:46.052818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.132 [2024-11-19 10:01:46.185052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.392 [2024-11-19 10:01:46.399985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.392 [2024-11-19 10:01:46.400054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.651 10:01:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.911 BaseBdev1_malloc 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 true 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 [2024-11-19 10:01:46.911723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:32.911 [2024-11-19 10:01:46.911818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.911 [2024-11-19 10:01:46.911852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:32.911 [2024-11-19 10:01:46.911871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.911 [2024-11-19 10:01:46.915088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.911 [2024-11-19 10:01:46.915182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:32.911 BaseBdev1 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.911 10:01:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 BaseBdev2_malloc 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 true 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 [2024-11-19 10:01:46.973933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:32.911 [2024-11-19 10:01:46.974006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.911 [2024-11-19 10:01:46.974033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:32.911 [2024-11-19 10:01:46.974052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.911 [2024-11-19 10:01:46.977031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.911 [2024-11-19 10:01:46.977310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:32.911 BaseBdev2 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.911 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.911 [2024-11-19 10:01:46.982229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.912 [2024-11-19 10:01:46.985066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.912 [2024-11-19 10:01:46.985377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.912 [2024-11-19 10:01:46.985401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.912 [2024-11-19 10:01:46.985746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:32.912 [2024-11-19 10:01:46.986035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.912 [2024-11-19 10:01:46.986056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:32.912 [2024-11-19 10:01:46.986293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.912 10:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.912 10:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.912 10:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.912 "name": "raid_bdev1", 00:07:32.912 "uuid": "73a85902-f538-4b81-a1a9-e1b2ba82ab43", 00:07:32.912 "strip_size_kb": 64, 00:07:32.912 "state": "online", 00:07:32.912 "raid_level": "raid0", 00:07:32.912 "superblock": true, 00:07:32.912 "num_base_bdevs": 2, 00:07:32.912 "num_base_bdevs_discovered": 2, 00:07:32.912 "num_base_bdevs_operational": 2, 00:07:32.912 "base_bdevs_list": [ 00:07:32.912 { 00:07:32.912 "name": "BaseBdev1", 00:07:32.912 "uuid": "07dfd750-6a7a-598a-8c45-c4a532c700e6", 00:07:32.912 "is_configured": true, 00:07:32.912 "data_offset": 2048, 00:07:32.912 "data_size": 63488 
00:07:32.912 }, 00:07:32.912 { 00:07:32.912 "name": "BaseBdev2", 00:07:32.912 "uuid": "bd776c2d-0467-5c30-a3b8-5a0e4daee366", 00:07:32.912 "is_configured": true, 00:07:32.912 "data_offset": 2048, 00:07:32.912 "data_size": 63488 00:07:32.912 } 00:07:32.912 ] 00:07:32.912 }' 00:07:32.912 10:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.912 10:01:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.481 10:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:33.481 10:01:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:33.481 [2024-11-19 10:01:47.644097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.418 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.419 "name": "raid_bdev1", 00:07:34.419 "uuid": "73a85902-f538-4b81-a1a9-e1b2ba82ab43", 00:07:34.419 "strip_size_kb": 64, 00:07:34.419 "state": "online", 00:07:34.419 "raid_level": "raid0", 00:07:34.419 "superblock": true, 00:07:34.419 "num_base_bdevs": 2, 00:07:34.419 "num_base_bdevs_discovered": 2, 00:07:34.419 "num_base_bdevs_operational": 2, 00:07:34.419 "base_bdevs_list": [ 00:07:34.419 { 00:07:34.419 "name": "BaseBdev1", 00:07:34.419 "uuid": "07dfd750-6a7a-598a-8c45-c4a532c700e6", 00:07:34.419 "is_configured": true, 00:07:34.419 "data_offset": 2048, 00:07:34.419 "data_size": 63488 
00:07:34.419 }, 00:07:34.419 { 00:07:34.419 "name": "BaseBdev2", 00:07:34.419 "uuid": "bd776c2d-0467-5c30-a3b8-5a0e4daee366", 00:07:34.419 "is_configured": true, 00:07:34.419 "data_offset": 2048, 00:07:34.419 "data_size": 63488 00:07:34.419 } 00:07:34.419 ] 00:07:34.419 }' 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.419 10:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.988 [2024-11-19 10:01:49.092357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.988 [2024-11-19 10:01:49.092403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.988 [2024-11-19 10:01:49.096163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.988 [2024-11-19 10:01:49.096228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.988 [2024-11-19 10:01:49.096300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.988 [2024-11-19 10:01:49.096321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:34.988 { 00:07:34.988 "results": [ 00:07:34.988 { 00:07:34.988 "job": "raid_bdev1", 00:07:34.988 "core_mask": "0x1", 00:07:34.988 "workload": "randrw", 00:07:34.988 "percentage": 50, 00:07:34.988 "status": "finished", 00:07:34.988 "queue_depth": 1, 00:07:34.988 "io_size": 131072, 00:07:34.988 "runtime": 1.445383, 00:07:34.988 "iops": 10025.024509074758, 00:07:34.988 "mibps": 1253.1280636343447, 00:07:34.988 
"io_failed": 1, 00:07:34.988 "io_timeout": 0, 00:07:34.988 "avg_latency_us": 139.21557229879357, 00:07:34.988 "min_latency_us": 36.77090909090909, 00:07:34.988 "max_latency_us": 2055.447272727273 00:07:34.988 } 00:07:34.988 ], 00:07:34.988 "core_count": 1 00:07:34.988 } 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61194 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61194 ']' 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61194 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61194 00:07:34.988 killing process with pid 61194 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61194' 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61194 00:07:34.988 [2024-11-19 10:01:49.134759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.988 10:01:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61194 00:07:35.247 [2024-11-19 10:01:49.273532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sZacmYqijt 00:07:36.687 10:01:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.687 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.688 10:01:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:07:36.688 00:07:36.688 real 0m4.742s 00:07:36.688 user 0m5.897s 00:07:36.688 sys 0m0.634s 00:07:36.688 ************************************ 00:07:36.688 END TEST raid_read_error_test 00:07:36.688 ************************************ 00:07:36.688 10:01:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.688 10:01:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.688 10:01:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:36.688 10:01:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.688 10:01:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.688 10:01:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.688 ************************************ 00:07:36.688 START TEST raid_write_error_test 00:07:36.688 ************************************ 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:36.688 10:01:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.688 10:01:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HgI3Klap0A 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61344 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61344 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61344 ']' 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.688 10:01:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.688 [2024-11-19 10:01:50.659531] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:36.688 [2024-11-19 10:01:50.659739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:07:36.688 [2024-11-19 10:01:50.848376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.947 [2024-11-19 10:01:50.997166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.206 [2024-11-19 10:01:51.229769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.206 [2024-11-19 10:01:51.229830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 BaseBdev1_malloc 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 true 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 [2024-11-19 10:01:51.772365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:37.774 [2024-11-19 10:01:51.772448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.774 [2024-11-19 10:01:51.772479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:37.774 [2024-11-19 10:01:51.772498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.774 [2024-11-19 10:01:51.775462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.774 [2024-11-19 10:01:51.775513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:37.774 BaseBdev1 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 BaseBdev2_malloc 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:37.774 10:01:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 true 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 [2024-11-19 10:01:51.837106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:37.774 [2024-11-19 10:01:51.837366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.774 [2024-11-19 10:01:51.837400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:37.774 [2024-11-19 10:01:51.837419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.774 [2024-11-19 10:01:51.840564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.774 [2024-11-19 10:01:51.840771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:37.774 BaseBdev2 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 [2024-11-19 10:01:51.845178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:37.774 [2024-11-19 10:01:51.847745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.774 [2024-11-19 10:01:51.848060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.774 [2024-11-19 10:01:51.848082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.774 [2024-11-19 10:01:51.848442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:37.774 [2024-11-19 10:01:51.848711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.774 [2024-11-19 10:01:51.848731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:37.774 [2024-11-19 10:01:51.849005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.774 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.775 "name": "raid_bdev1", 00:07:37.775 "uuid": "e0db56a8-0dc9-4f41-b5e3-47c46f08b881", 00:07:37.775 "strip_size_kb": 64, 00:07:37.775 "state": "online", 00:07:37.775 "raid_level": "raid0", 00:07:37.775 "superblock": true, 00:07:37.775 "num_base_bdevs": 2, 00:07:37.775 "num_base_bdevs_discovered": 2, 00:07:37.775 "num_base_bdevs_operational": 2, 00:07:37.775 "base_bdevs_list": [ 00:07:37.775 { 00:07:37.775 "name": "BaseBdev1", 00:07:37.775 "uuid": "4e95cb6d-e69d-5219-a906-cd61ababab7e", 00:07:37.775 "is_configured": true, 00:07:37.775 "data_offset": 2048, 00:07:37.775 "data_size": 63488 00:07:37.775 }, 00:07:37.775 { 00:07:37.775 "name": "BaseBdev2", 00:07:37.775 "uuid": "3a1890f9-5b70-5ea0-8c72-1a9355134164", 00:07:37.775 "is_configured": true, 00:07:37.775 "data_offset": 2048, 00:07:37.775 "data_size": 63488 00:07:37.775 } 00:07:37.775 ] 00:07:37.775 }' 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.775 10:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.342 10:01:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:38.342 10:01:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:38.342 [2024-11-19 10:01:52.470749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:39.277 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:39.277 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.278 10:01:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.278 "name": "raid_bdev1", 00:07:39.278 "uuid": "e0db56a8-0dc9-4f41-b5e3-47c46f08b881", 00:07:39.278 "strip_size_kb": 64, 00:07:39.278 "state": "online", 00:07:39.278 "raid_level": "raid0", 00:07:39.278 "superblock": true, 00:07:39.278 "num_base_bdevs": 2, 00:07:39.278 "num_base_bdevs_discovered": 2, 00:07:39.278 "num_base_bdevs_operational": 2, 00:07:39.278 "base_bdevs_list": [ 00:07:39.278 { 00:07:39.278 "name": "BaseBdev1", 00:07:39.278 "uuid": "4e95cb6d-e69d-5219-a906-cd61ababab7e", 00:07:39.278 "is_configured": true, 00:07:39.278 "data_offset": 2048, 00:07:39.278 "data_size": 63488 00:07:39.278 }, 00:07:39.278 { 00:07:39.278 "name": "BaseBdev2", 00:07:39.278 "uuid": "3a1890f9-5b70-5ea0-8c72-1a9355134164", 00:07:39.278 "is_configured": true, 00:07:39.278 "data_offset": 2048, 00:07:39.278 "data_size": 63488 00:07:39.278 } 00:07:39.278 ] 00:07:39.278 }' 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.278 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.846 [2024-11-19 10:01:53.904562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.846 [2024-11-19 10:01:53.904755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.846 [2024-11-19 10:01:53.908319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.846 { 00:07:39.846 "results": [ 00:07:39.846 { 00:07:39.846 "job": "raid_bdev1", 00:07:39.846 "core_mask": "0x1", 00:07:39.846 "workload": "randrw", 00:07:39.846 "percentage": 50, 00:07:39.846 "status": "finished", 00:07:39.846 "queue_depth": 1, 00:07:39.846 "io_size": 131072, 00:07:39.846 "runtime": 1.431548, 00:07:39.846 "iops": 10301.43592810021, 00:07:39.846 "mibps": 1287.6794910125263, 00:07:39.846 "io_failed": 1, 00:07:39.846 "io_timeout": 0, 00:07:39.846 "avg_latency_us": 136.5848164311956, 00:07:39.846 "min_latency_us": 35.60727272727273, 00:07:39.846 "max_latency_us": 1705.4254545454546 00:07:39.846 } 00:07:39.846 ], 00:07:39.846 "core_count": 1 00:07:39.846 } 00:07:39.846 [2024-11-19 10:01:53.908500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.846 [2024-11-19 10:01:53.908562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.846 [2024-11-19 10:01:53.908584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61344 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61344 ']' 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61344 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61344 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.846 killing process with pid 61344 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61344' 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61344 00:07:39.846 [2024-11-19 10:01:53.950063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.846 10:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61344 00:07:39.846 [2024-11-19 10:01:54.066282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HgI3Klap0A 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:41.224 00:07:41.224 real 0m4.716s 00:07:41.224 user 0m5.853s 00:07:41.224 sys 0m0.658s 00:07:41.224 ************************************ 00:07:41.224 END TEST raid_write_error_test 00:07:41.224 ************************************ 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.224 10:01:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.224 10:01:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.224 10:01:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:41.224 10:01:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.224 10:01:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.224 10:01:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.224 ************************************ 00:07:41.224 START TEST raid_state_function_test 00:07:41.224 ************************************ 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.224 Process raid pid: 61489 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61489 
00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61489' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61489 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61489 ']' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.224 10:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.224 [2024-11-19 10:01:55.422472] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:41.224 [2024-11-19 10:01:55.422683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.480 [2024-11-19 10:01:55.608132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.737 [2024-11-19 10:01:55.758961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.035 [2024-11-19 10:01:55.988120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.035 [2024-11-19 10:01:55.988192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.315 [2024-11-19 10:01:56.434978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.315 [2024-11-19 10:01:56.435052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.315 [2024-11-19 10:01:56.435072] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.315 [2024-11-19 10:01:56.435089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.315 10:01:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.315 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.316 "name": "Existed_Raid", 00:07:42.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.316 "strip_size_kb": 64, 00:07:42.316 "state": "configuring", 00:07:42.316 
"raid_level": "concat", 00:07:42.316 "superblock": false, 00:07:42.316 "num_base_bdevs": 2, 00:07:42.316 "num_base_bdevs_discovered": 0, 00:07:42.316 "num_base_bdevs_operational": 2, 00:07:42.316 "base_bdevs_list": [ 00:07:42.316 { 00:07:42.316 "name": "BaseBdev1", 00:07:42.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.316 "is_configured": false, 00:07:42.316 "data_offset": 0, 00:07:42.316 "data_size": 0 00:07:42.316 }, 00:07:42.316 { 00:07:42.316 "name": "BaseBdev2", 00:07:42.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.316 "is_configured": false, 00:07:42.316 "data_offset": 0, 00:07:42.316 "data_size": 0 00:07:42.316 } 00:07:42.316 ] 00:07:42.316 }' 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.316 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 [2024-11-19 10:01:56.963015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.884 [2024-11-19 10:01:56.963065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:42.884 [2024-11-19 10:01:56.970975] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.884 [2024-11-19 10:01:56.971031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.884 [2024-11-19 10:01:56.971048] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.884 [2024-11-19 10:01:56.971067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 [2024-11-19 10:01:57.021231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.884 BaseBdev1 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 [ 00:07:42.884 { 00:07:42.884 "name": "BaseBdev1", 00:07:42.884 "aliases": [ 00:07:42.884 "c7883320-7367-42ec-a23a-dd3cf54bcd11" 00:07:42.884 ], 00:07:42.884 "product_name": "Malloc disk", 00:07:42.884 "block_size": 512, 00:07:42.884 "num_blocks": 65536, 00:07:42.884 "uuid": "c7883320-7367-42ec-a23a-dd3cf54bcd11", 00:07:42.884 "assigned_rate_limits": { 00:07:42.884 "rw_ios_per_sec": 0, 00:07:42.884 "rw_mbytes_per_sec": 0, 00:07:42.884 "r_mbytes_per_sec": 0, 00:07:42.884 "w_mbytes_per_sec": 0 00:07:42.884 }, 00:07:42.884 "claimed": true, 00:07:42.884 "claim_type": "exclusive_write", 00:07:42.884 "zoned": false, 00:07:42.884 "supported_io_types": { 00:07:42.884 "read": true, 00:07:42.884 "write": true, 00:07:42.884 "unmap": true, 00:07:42.884 "flush": true, 00:07:42.884 "reset": true, 00:07:42.884 "nvme_admin": false, 00:07:42.884 "nvme_io": false, 00:07:42.884 "nvme_io_md": false, 00:07:42.884 "write_zeroes": true, 00:07:42.884 "zcopy": true, 00:07:42.884 "get_zone_info": false, 00:07:42.884 "zone_management": false, 00:07:42.884 "zone_append": false, 00:07:42.884 "compare": false, 00:07:42.884 "compare_and_write": false, 00:07:42.884 "abort": true, 00:07:42.884 "seek_hole": false, 00:07:42.884 "seek_data": false, 00:07:42.884 "copy": true, 00:07:42.884 "nvme_iov_md": 
false 00:07:42.884 }, 00:07:42.884 "memory_domains": [ 00:07:42.884 { 00:07:42.884 "dma_device_id": "system", 00:07:42.884 "dma_device_type": 1 00:07:42.884 }, 00:07:42.884 { 00:07:42.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.884 "dma_device_type": 2 00:07:42.884 } 00:07:42.884 ], 00:07:42.884 "driver_specific": {} 00:07:42.884 } 00:07:42.884 ] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.884 
10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.884 "name": "Existed_Raid", 00:07:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.884 "strip_size_kb": 64, 00:07:42.884 "state": "configuring", 00:07:42.884 "raid_level": "concat", 00:07:42.884 "superblock": false, 00:07:42.884 "num_base_bdevs": 2, 00:07:42.884 "num_base_bdevs_discovered": 1, 00:07:42.884 "num_base_bdevs_operational": 2, 00:07:42.884 "base_bdevs_list": [ 00:07:42.884 { 00:07:42.884 "name": "BaseBdev1", 00:07:42.884 "uuid": "c7883320-7367-42ec-a23a-dd3cf54bcd11", 00:07:42.884 "is_configured": true, 00:07:42.884 "data_offset": 0, 00:07:42.884 "data_size": 65536 00:07:42.884 }, 00:07:42.884 { 00:07:42.884 "name": "BaseBdev2", 00:07:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.884 "is_configured": false, 00:07:42.884 "data_offset": 0, 00:07:42.884 "data_size": 0 00:07:42.884 } 00:07:42.884 ] 00:07:42.884 }' 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.884 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 [2024-11-19 10:01:57.577461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.451 [2024-11-19 10:01:57.577675] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 [2024-11-19 10:01:57.585480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.451 [2024-11-19 10:01:57.588148] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.451 [2024-11-19 10:01:57.588349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:43.451 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.452 "name": "Existed_Raid", 00:07:43.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.452 "strip_size_kb": 64, 00:07:43.452 "state": "configuring", 00:07:43.452 "raid_level": "concat", 00:07:43.452 "superblock": false, 00:07:43.452 "num_base_bdevs": 2, 00:07:43.452 "num_base_bdevs_discovered": 1, 00:07:43.452 "num_base_bdevs_operational": 2, 00:07:43.452 "base_bdevs_list": [ 00:07:43.452 { 00:07:43.452 "name": "BaseBdev1", 00:07:43.452 "uuid": "c7883320-7367-42ec-a23a-dd3cf54bcd11", 00:07:43.452 "is_configured": true, 00:07:43.452 "data_offset": 0, 00:07:43.452 "data_size": 65536 00:07:43.452 }, 00:07:43.452 { 00:07:43.452 "name": "BaseBdev2", 00:07:43.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.452 "is_configured": false, 00:07:43.452 "data_offset": 0, 00:07:43.452 "data_size": 0 00:07:43.452 } 
00:07:43.452 ] 00:07:43.452 }' 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.452 10:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 [2024-11-19 10:01:58.132944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.018 [2024-11-19 10:01:58.133275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.018 [2024-11-19 10:01:58.133330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:44.018 [2024-11-19 10:01:58.133807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.018 [2024-11-19 10:01:58.134045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.018 [2024-11-19 10:01:58.134071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:44.018 [2024-11-19 10:01:58.134436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.018 BaseBdev2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.018 10:01:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 [ 00:07:44.018 { 00:07:44.018 "name": "BaseBdev2", 00:07:44.018 "aliases": [ 00:07:44.018 "a7be40c5-0568-4bf1-96a4-5028cfb7191b" 00:07:44.018 ], 00:07:44.018 "product_name": "Malloc disk", 00:07:44.018 "block_size": 512, 00:07:44.018 "num_blocks": 65536, 00:07:44.018 "uuid": "a7be40c5-0568-4bf1-96a4-5028cfb7191b", 00:07:44.018 "assigned_rate_limits": { 00:07:44.018 "rw_ios_per_sec": 0, 00:07:44.018 "rw_mbytes_per_sec": 0, 00:07:44.018 "r_mbytes_per_sec": 0, 00:07:44.018 "w_mbytes_per_sec": 0 00:07:44.018 }, 00:07:44.018 "claimed": true, 00:07:44.018 "claim_type": "exclusive_write", 00:07:44.018 "zoned": false, 00:07:44.018 "supported_io_types": { 00:07:44.018 "read": true, 00:07:44.018 "write": true, 00:07:44.018 "unmap": true, 00:07:44.018 "flush": true, 00:07:44.018 "reset": true, 00:07:44.018 "nvme_admin": false, 00:07:44.018 "nvme_io": false, 00:07:44.018 "nvme_io_md": 
false, 00:07:44.018 "write_zeroes": true, 00:07:44.018 "zcopy": true, 00:07:44.018 "get_zone_info": false, 00:07:44.018 "zone_management": false, 00:07:44.018 "zone_append": false, 00:07:44.018 "compare": false, 00:07:44.018 "compare_and_write": false, 00:07:44.018 "abort": true, 00:07:44.018 "seek_hole": false, 00:07:44.018 "seek_data": false, 00:07:44.018 "copy": true, 00:07:44.018 "nvme_iov_md": false 00:07:44.018 }, 00:07:44.018 "memory_domains": [ 00:07:44.018 { 00:07:44.018 "dma_device_id": "system", 00:07:44.018 "dma_device_type": 1 00:07:44.018 }, 00:07:44.018 { 00:07:44.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.018 "dma_device_type": 2 00:07:44.018 } 00:07:44.018 ], 00:07:44.018 "driver_specific": {} 00:07:44.018 } 00:07:44.018 ] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.018 "name": "Existed_Raid", 00:07:44.018 "uuid": "61d0b73d-bdd5-47af-9447-a1a996f74bf7", 00:07:44.018 "strip_size_kb": 64, 00:07:44.018 "state": "online", 00:07:44.018 "raid_level": "concat", 00:07:44.018 "superblock": false, 00:07:44.018 "num_base_bdevs": 2, 00:07:44.018 "num_base_bdevs_discovered": 2, 00:07:44.018 "num_base_bdevs_operational": 2, 00:07:44.018 "base_bdevs_list": [ 00:07:44.018 { 00:07:44.018 "name": "BaseBdev1", 00:07:44.018 "uuid": "c7883320-7367-42ec-a23a-dd3cf54bcd11", 00:07:44.018 "is_configured": true, 00:07:44.018 "data_offset": 0, 00:07:44.018 "data_size": 65536 00:07:44.018 }, 00:07:44.018 { 00:07:44.018 "name": "BaseBdev2", 00:07:44.018 "uuid": "a7be40c5-0568-4bf1-96a4-5028cfb7191b", 00:07:44.018 "is_configured": true, 00:07:44.018 "data_offset": 0, 00:07:44.018 "data_size": 65536 00:07:44.018 } 00:07:44.018 ] 00:07:44.018 }' 00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:44.018 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.587 [2024-11-19 10:01:58.681568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.587 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.587 "name": "Existed_Raid", 00:07:44.587 "aliases": [ 00:07:44.587 "61d0b73d-bdd5-47af-9447-a1a996f74bf7" 00:07:44.587 ], 00:07:44.587 "product_name": "Raid Volume", 00:07:44.587 "block_size": 512, 00:07:44.587 "num_blocks": 131072, 00:07:44.587 "uuid": "61d0b73d-bdd5-47af-9447-a1a996f74bf7", 00:07:44.587 "assigned_rate_limits": { 00:07:44.587 "rw_ios_per_sec": 0, 00:07:44.587 "rw_mbytes_per_sec": 0, 00:07:44.587 "r_mbytes_per_sec": 
0, 00:07:44.587 "w_mbytes_per_sec": 0 00:07:44.587 }, 00:07:44.587 "claimed": false, 00:07:44.587 "zoned": false, 00:07:44.587 "supported_io_types": { 00:07:44.587 "read": true, 00:07:44.587 "write": true, 00:07:44.587 "unmap": true, 00:07:44.587 "flush": true, 00:07:44.587 "reset": true, 00:07:44.587 "nvme_admin": false, 00:07:44.587 "nvme_io": false, 00:07:44.587 "nvme_io_md": false, 00:07:44.587 "write_zeroes": true, 00:07:44.587 "zcopy": false, 00:07:44.587 "get_zone_info": false, 00:07:44.587 "zone_management": false, 00:07:44.587 "zone_append": false, 00:07:44.587 "compare": false, 00:07:44.587 "compare_and_write": false, 00:07:44.587 "abort": false, 00:07:44.587 "seek_hole": false, 00:07:44.587 "seek_data": false, 00:07:44.587 "copy": false, 00:07:44.587 "nvme_iov_md": false 00:07:44.587 }, 00:07:44.587 "memory_domains": [ 00:07:44.587 { 00:07:44.587 "dma_device_id": "system", 00:07:44.587 "dma_device_type": 1 00:07:44.587 }, 00:07:44.587 { 00:07:44.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.587 "dma_device_type": 2 00:07:44.587 }, 00:07:44.587 { 00:07:44.587 "dma_device_id": "system", 00:07:44.587 "dma_device_type": 1 00:07:44.587 }, 00:07:44.587 { 00:07:44.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.587 "dma_device_type": 2 00:07:44.587 } 00:07:44.587 ], 00:07:44.587 "driver_specific": { 00:07:44.587 "raid": { 00:07:44.587 "uuid": "61d0b73d-bdd5-47af-9447-a1a996f74bf7", 00:07:44.587 "strip_size_kb": 64, 00:07:44.587 "state": "online", 00:07:44.587 "raid_level": "concat", 00:07:44.587 "superblock": false, 00:07:44.587 "num_base_bdevs": 2, 00:07:44.587 "num_base_bdevs_discovered": 2, 00:07:44.587 "num_base_bdevs_operational": 2, 00:07:44.587 "base_bdevs_list": [ 00:07:44.587 { 00:07:44.587 "name": "BaseBdev1", 00:07:44.587 "uuid": "c7883320-7367-42ec-a23a-dd3cf54bcd11", 00:07:44.587 "is_configured": true, 00:07:44.587 "data_offset": 0, 00:07:44.587 "data_size": 65536 00:07:44.587 }, 00:07:44.587 { 00:07:44.587 "name": "BaseBdev2", 
00:07:44.587 "uuid": "a7be40c5-0568-4bf1-96a4-5028cfb7191b", 00:07:44.587 "is_configured": true, 00:07:44.587 "data_offset": 0, 00:07:44.587 "data_size": 65536 00:07:44.587 } 00:07:44.587 ] 00:07:44.587 } 00:07:44.587 } 00:07:44.587 }' 00:07:44.588 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.588 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.588 BaseBdev2' 00:07:44.588 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.847 10:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.847 [2024-11-19 10:01:58.965731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.847 [2024-11-19 10:01:58.965840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.847 [2024-11-19 10:01:58.965939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.847 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.106 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.106 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.106 "name": "Existed_Raid", 00:07:45.106 "uuid": "61d0b73d-bdd5-47af-9447-a1a996f74bf7", 00:07:45.106 "strip_size_kb": 64, 00:07:45.106 
"state": "offline", 00:07:45.106 "raid_level": "concat", 00:07:45.106 "superblock": false, 00:07:45.106 "num_base_bdevs": 2, 00:07:45.106 "num_base_bdevs_discovered": 1, 00:07:45.106 "num_base_bdevs_operational": 1, 00:07:45.106 "base_bdevs_list": [ 00:07:45.106 { 00:07:45.106 "name": null, 00:07:45.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.106 "is_configured": false, 00:07:45.106 "data_offset": 0, 00:07:45.106 "data_size": 65536 00:07:45.106 }, 00:07:45.106 { 00:07:45.106 "name": "BaseBdev2", 00:07:45.106 "uuid": "a7be40c5-0568-4bf1-96a4-5028cfb7191b", 00:07:45.106 "is_configured": true, 00:07:45.106 "data_offset": 0, 00:07:45.106 "data_size": 65536 00:07:45.106 } 00:07:45.106 ] 00:07:45.106 }' 00:07:45.106 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.106 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.671 [2024-11-19 10:01:59.668793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:45.671 [2024-11-19 10:01:59.668900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.671 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61489 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61489 ']' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61489 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61489 00:07:45.672 killing process with pid 61489 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61489' 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61489 00:07:45.672 [2024-11-19 10:01:59.859568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.672 10:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61489 00:07:45.672 [2024-11-19 10:01:59.875961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:47.083 00:07:47.083 real 0m5.742s 00:07:47.083 user 0m8.552s 00:07:47.083 sys 0m0.869s 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.083 ************************************ 00:07:47.083 END TEST raid_state_function_test 00:07:47.083 ************************************ 00:07:47.083 10:02:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:47.083 10:02:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:47.083 10:02:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.083 10:02:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.083 ************************************ 00:07:47.083 START TEST raid_state_function_test_sb 00:07:47.083 ************************************ 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:47.083 Process raid pid: 61742 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61742 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61742' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61742 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61742 ']' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.083 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.083 10:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.083 [2024-11-19 10:02:01.226619] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:07:47.083 [2024-11-19 10:02:01.227015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.341 [2024-11-19 10:02:01.414653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.341 [2024-11-19 10:02:01.572964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.905 [2024-11-19 10:02:01.831890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.905 [2024-11-19 10:02:01.831959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.166 [2024-11-19 10:02:02.173645] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:48.166 [2024-11-19 10:02:02.173731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.166 [2024-11-19 10:02:02.173753] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.166 [2024-11-19 10:02:02.173774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.166 "name": "Existed_Raid", 00:07:48.166 "uuid": "6c3456fe-946a-45e4-8f8e-f2934fa5605a", 00:07:48.166 "strip_size_kb": 64, 00:07:48.166 "state": "configuring", 00:07:48.166 "raid_level": "concat", 00:07:48.166 "superblock": true, 00:07:48.166 "num_base_bdevs": 2, 00:07:48.166 "num_base_bdevs_discovered": 0, 00:07:48.166 "num_base_bdevs_operational": 2, 00:07:48.166 "base_bdevs_list": [ 00:07:48.166 { 00:07:48.166 "name": "BaseBdev1", 00:07:48.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.166 "is_configured": false, 00:07:48.166 "data_offset": 0, 00:07:48.166 "data_size": 0 00:07:48.166 }, 00:07:48.166 { 00:07:48.166 "name": "BaseBdev2", 00:07:48.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.166 "is_configured": false, 00:07:48.166 "data_offset": 0, 00:07:48.166 "data_size": 0 00:07:48.166 } 00:07:48.166 ] 00:07:48.166 }' 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.166 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 [2024-11-19 10:02:02.737731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:48.735 [2024-11-19 10:02:02.737951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 [2024-11-19 10:02:02.745674] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.735 [2024-11-19 10:02:02.745918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.735 [2024-11-19 10:02:02.745952] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.735 [2024-11-19 10:02:02.745975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 [2024-11-19 10:02:02.797451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.735 BaseBdev1 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.735 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.735 [ 00:07:48.735 { 00:07:48.735 "name": "BaseBdev1", 00:07:48.735 "aliases": [ 00:07:48.735 "f6d099b3-ebe9-4417-9197-66b394e99d25" 00:07:48.735 ], 00:07:48.735 "product_name": "Malloc disk", 00:07:48.735 "block_size": 512, 00:07:48.735 "num_blocks": 65536, 00:07:48.735 "uuid": "f6d099b3-ebe9-4417-9197-66b394e99d25", 00:07:48.735 "assigned_rate_limits": { 00:07:48.735 "rw_ios_per_sec": 0, 00:07:48.735 "rw_mbytes_per_sec": 0, 00:07:48.735 "r_mbytes_per_sec": 0, 00:07:48.735 "w_mbytes_per_sec": 0 00:07:48.735 }, 00:07:48.735 "claimed": true, 
00:07:48.735 "claim_type": "exclusive_write", 00:07:48.735 "zoned": false, 00:07:48.736 "supported_io_types": { 00:07:48.736 "read": true, 00:07:48.736 "write": true, 00:07:48.736 "unmap": true, 00:07:48.736 "flush": true, 00:07:48.736 "reset": true, 00:07:48.736 "nvme_admin": false, 00:07:48.736 "nvme_io": false, 00:07:48.736 "nvme_io_md": false, 00:07:48.736 "write_zeroes": true, 00:07:48.736 "zcopy": true, 00:07:48.736 "get_zone_info": false, 00:07:48.736 "zone_management": false, 00:07:48.736 "zone_append": false, 00:07:48.736 "compare": false, 00:07:48.736 "compare_and_write": false, 00:07:48.736 "abort": true, 00:07:48.736 "seek_hole": false, 00:07:48.736 "seek_data": false, 00:07:48.736 "copy": true, 00:07:48.736 "nvme_iov_md": false 00:07:48.736 }, 00:07:48.736 "memory_domains": [ 00:07:48.736 { 00:07:48.736 "dma_device_id": "system", 00:07:48.736 "dma_device_type": 1 00:07:48.736 }, 00:07:48.736 { 00:07:48.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.736 "dma_device_type": 2 00:07:48.736 } 00:07:48.736 ], 00:07:48.736 "driver_specific": {} 00:07:48.736 } 00:07:48.736 ] 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.736 10:02:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.736 "name": "Existed_Raid", 00:07:48.736 "uuid": "e547ff0b-7abd-45f6-8d32-2e2391ec160d", 00:07:48.736 "strip_size_kb": 64, 00:07:48.736 "state": "configuring", 00:07:48.736 "raid_level": "concat", 00:07:48.736 "superblock": true, 00:07:48.736 "num_base_bdevs": 2, 00:07:48.736 "num_base_bdevs_discovered": 1, 00:07:48.736 "num_base_bdevs_operational": 2, 00:07:48.736 "base_bdevs_list": [ 00:07:48.736 { 00:07:48.736 "name": "BaseBdev1", 00:07:48.736 "uuid": "f6d099b3-ebe9-4417-9197-66b394e99d25", 00:07:48.736 "is_configured": true, 00:07:48.736 "data_offset": 2048, 00:07:48.736 "data_size": 63488 00:07:48.736 }, 00:07:48.736 { 00:07:48.736 "name": "BaseBdev2", 00:07:48.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.736 
"is_configured": false, 00:07:48.736 "data_offset": 0, 00:07:48.736 "data_size": 0 00:07:48.736 } 00:07:48.736 ] 00:07:48.736 }' 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.736 10:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.304 [2024-11-19 10:02:03.317640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.304 [2024-11-19 10:02:03.317863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.304 [2024-11-19 10:02:03.325692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.304 [2024-11-19 10:02:03.328523] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.304 [2024-11-19 10:02:03.328582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.304 10:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.304 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.305 10:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.305 "name": "Existed_Raid", 00:07:49.305 "uuid": "e59a9de0-eca4-46b8-92f5-ac1271cceecc", 00:07:49.305 "strip_size_kb": 64, 00:07:49.305 "state": "configuring", 00:07:49.305 "raid_level": "concat", 00:07:49.305 "superblock": true, 00:07:49.305 "num_base_bdevs": 2, 00:07:49.305 "num_base_bdevs_discovered": 1, 00:07:49.305 "num_base_bdevs_operational": 2, 00:07:49.305 "base_bdevs_list": [ 00:07:49.305 { 00:07:49.305 "name": "BaseBdev1", 00:07:49.305 "uuid": "f6d099b3-ebe9-4417-9197-66b394e99d25", 00:07:49.305 "is_configured": true, 00:07:49.305 "data_offset": 2048, 00:07:49.305 "data_size": 63488 00:07:49.305 }, 00:07:49.305 { 00:07:49.305 "name": "BaseBdev2", 00:07:49.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.305 "is_configured": false, 00:07:49.305 "data_offset": 0, 00:07:49.305 "data_size": 0 00:07:49.305 } 00:07:49.305 ] 00:07:49.305 }' 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.305 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.873 [2024-11-19 10:02:03.880094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.873 [2024-11-19 10:02:03.880902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.873 [2024-11-19 10:02:03.880931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:49.873 BaseBdev2 00:07:49.873 [2024-11-19 10:02:03.881330] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.873 [2024-11-19 10:02:03.881567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.873 [2024-11-19 10:02:03.881598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:49.873 [2024-11-19 10:02:03.881791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.873 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.874 
10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.874 [ 00:07:49.874 { 00:07:49.874 "name": "BaseBdev2", 00:07:49.874 "aliases": [ 00:07:49.874 "3529a71d-248c-46c6-ac33-d4b2c550d0b8" 00:07:49.874 ], 00:07:49.874 "product_name": "Malloc disk", 00:07:49.874 "block_size": 512, 00:07:49.874 "num_blocks": 65536, 00:07:49.874 "uuid": "3529a71d-248c-46c6-ac33-d4b2c550d0b8", 00:07:49.874 "assigned_rate_limits": { 00:07:49.874 "rw_ios_per_sec": 0, 00:07:49.874 "rw_mbytes_per_sec": 0, 00:07:49.874 "r_mbytes_per_sec": 0, 00:07:49.874 "w_mbytes_per_sec": 0 00:07:49.874 }, 00:07:49.874 "claimed": true, 00:07:49.874 "claim_type": "exclusive_write", 00:07:49.874 "zoned": false, 00:07:49.874 "supported_io_types": { 00:07:49.874 "read": true, 00:07:49.874 "write": true, 00:07:49.874 "unmap": true, 00:07:49.874 "flush": true, 00:07:49.874 "reset": true, 00:07:49.874 "nvme_admin": false, 00:07:49.874 "nvme_io": false, 00:07:49.874 "nvme_io_md": false, 00:07:49.874 "write_zeroes": true, 00:07:49.874 "zcopy": true, 00:07:49.874 "get_zone_info": false, 00:07:49.874 "zone_management": false, 00:07:49.874 "zone_append": false, 00:07:49.874 "compare": false, 00:07:49.874 "compare_and_write": false, 00:07:49.874 "abort": true, 00:07:49.874 "seek_hole": false, 00:07:49.874 "seek_data": false, 00:07:49.874 "copy": true, 00:07:49.874 "nvme_iov_md": false 00:07:49.874 }, 00:07:49.874 "memory_domains": [ 00:07:49.874 { 00:07:49.874 "dma_device_id": "system", 00:07:49.874 "dma_device_type": 1 00:07:49.874 }, 00:07:49.874 { 00:07:49.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.874 "dma_device_type": 2 00:07:49.874 } 00:07:49.874 ], 00:07:49.874 "driver_specific": {} 00:07:49.874 } 00:07:49.874 ] 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.874 10:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.874 10:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.874 "name": "Existed_Raid", 00:07:49.874 "uuid": "e59a9de0-eca4-46b8-92f5-ac1271cceecc", 00:07:49.874 "strip_size_kb": 64, 00:07:49.874 "state": "online", 00:07:49.874 "raid_level": "concat", 00:07:49.874 "superblock": true, 00:07:49.874 "num_base_bdevs": 2, 00:07:49.874 "num_base_bdevs_discovered": 2, 00:07:49.874 "num_base_bdevs_operational": 2, 00:07:49.874 "base_bdevs_list": [ 00:07:49.874 { 00:07:49.874 "name": "BaseBdev1", 00:07:49.874 "uuid": "f6d099b3-ebe9-4417-9197-66b394e99d25", 00:07:49.874 "is_configured": true, 00:07:49.874 "data_offset": 2048, 00:07:49.874 "data_size": 63488 00:07:49.874 }, 00:07:49.874 { 00:07:49.874 "name": "BaseBdev2", 00:07:49.874 "uuid": "3529a71d-248c-46c6-ac33-d4b2c550d0b8", 00:07:49.874 "is_configured": true, 00:07:49.874 "data_offset": 2048, 00:07:49.874 "data_size": 63488 00:07:49.874 } 00:07:49.874 ] 00:07:49.874 }' 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.874 10:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.442 [2024-11-19 10:02:04.444708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.442 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.442 "name": "Existed_Raid", 00:07:50.442 "aliases": [ 00:07:50.442 "e59a9de0-eca4-46b8-92f5-ac1271cceecc" 00:07:50.442 ], 00:07:50.442 "product_name": "Raid Volume", 00:07:50.442 "block_size": 512, 00:07:50.442 "num_blocks": 126976, 00:07:50.442 "uuid": "e59a9de0-eca4-46b8-92f5-ac1271cceecc", 00:07:50.442 "assigned_rate_limits": { 00:07:50.442 "rw_ios_per_sec": 0, 00:07:50.442 "rw_mbytes_per_sec": 0, 00:07:50.442 "r_mbytes_per_sec": 0, 00:07:50.442 "w_mbytes_per_sec": 0 00:07:50.442 }, 00:07:50.442 "claimed": false, 00:07:50.442 "zoned": false, 00:07:50.442 "supported_io_types": { 00:07:50.442 "read": true, 00:07:50.442 "write": true, 00:07:50.442 "unmap": true, 00:07:50.442 "flush": true, 00:07:50.442 "reset": true, 00:07:50.442 "nvme_admin": false, 00:07:50.442 "nvme_io": false, 00:07:50.442 "nvme_io_md": false, 00:07:50.442 "write_zeroes": true, 00:07:50.442 "zcopy": false, 00:07:50.442 "get_zone_info": false, 00:07:50.442 "zone_management": false, 00:07:50.442 "zone_append": false, 00:07:50.442 "compare": false, 00:07:50.442 "compare_and_write": false, 00:07:50.442 "abort": false, 00:07:50.442 "seek_hole": false, 00:07:50.442 "seek_data": false, 00:07:50.442 "copy": false, 00:07:50.442 "nvme_iov_md": false 00:07:50.442 }, 00:07:50.442 "memory_domains": [ 00:07:50.442 { 00:07:50.442 
"dma_device_id": "system", 00:07:50.442 "dma_device_type": 1 00:07:50.442 }, 00:07:50.442 { 00:07:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.442 "dma_device_type": 2 00:07:50.442 }, 00:07:50.442 { 00:07:50.442 "dma_device_id": "system", 00:07:50.442 "dma_device_type": 1 00:07:50.442 }, 00:07:50.442 { 00:07:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.442 "dma_device_type": 2 00:07:50.442 } 00:07:50.442 ], 00:07:50.442 "driver_specific": { 00:07:50.442 "raid": { 00:07:50.442 "uuid": "e59a9de0-eca4-46b8-92f5-ac1271cceecc", 00:07:50.442 "strip_size_kb": 64, 00:07:50.442 "state": "online", 00:07:50.442 "raid_level": "concat", 00:07:50.442 "superblock": true, 00:07:50.442 "num_base_bdevs": 2, 00:07:50.442 "num_base_bdevs_discovered": 2, 00:07:50.442 "num_base_bdevs_operational": 2, 00:07:50.442 "base_bdevs_list": [ 00:07:50.442 { 00:07:50.442 "name": "BaseBdev1", 00:07:50.442 "uuid": "f6d099b3-ebe9-4417-9197-66b394e99d25", 00:07:50.442 "is_configured": true, 00:07:50.442 "data_offset": 2048, 00:07:50.443 "data_size": 63488 00:07:50.443 }, 00:07:50.443 { 00:07:50.443 "name": "BaseBdev2", 00:07:50.443 "uuid": "3529a71d-248c-46c6-ac33-d4b2c550d0b8", 00:07:50.443 "is_configured": true, 00:07:50.443 "data_offset": 2048, 00:07:50.443 "data_size": 63488 00:07:50.443 } 00:07:50.443 ] 00:07:50.443 } 00:07:50.443 } 00:07:50.443 }' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.443 BaseBdev2' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.443 10:02:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.443 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 [2024-11-19 10:02:04.680434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.702 [2024-11-19 10:02:04.680484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.702 [2024-11-19 10:02:04.680595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.702 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.702 "name": "Existed_Raid", 00:07:50.702 "uuid": "e59a9de0-eca4-46b8-92f5-ac1271cceecc", 00:07:50.702 "strip_size_kb": 64, 00:07:50.702 "state": "offline", 00:07:50.702 "raid_level": "concat", 00:07:50.702 "superblock": true, 00:07:50.702 "num_base_bdevs": 2, 00:07:50.702 "num_base_bdevs_discovered": 1, 00:07:50.702 "num_base_bdevs_operational": 1, 00:07:50.702 "base_bdevs_list": [ 00:07:50.702 { 00:07:50.702 "name": null, 00:07:50.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.702 "is_configured": false, 00:07:50.702 "data_offset": 0, 00:07:50.703 "data_size": 63488 00:07:50.703 }, 00:07:50.703 { 00:07:50.703 "name": "BaseBdev2", 00:07:50.703 "uuid": "3529a71d-248c-46c6-ac33-d4b2c550d0b8", 00:07:50.703 "is_configured": true, 00:07:50.703 "data_offset": 2048, 00:07:50.703 "data_size": 63488 00:07:50.703 } 00:07:50.703 ] 
00:07:50.703 }' 00:07:50.703 10:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.703 10:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 [2024-11-19 10:02:05.347235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.270 [2024-11-19 10:02:05.347310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.270 10:02:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61742 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61742 ']' 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61742 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:51.270 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61742 00:07:51.530 killing process with pid 61742 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61742' 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61742 00:07:51.530 [2024-11-19 10:02:05.531122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.530 10:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61742 00:07:51.530 [2024-11-19 10:02:05.547688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.961 10:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.961 00:07:52.961 real 0m5.617s 00:07:52.961 user 0m8.316s 00:07:52.961 sys 0m0.844s 00:07:52.961 10:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.961 10:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 ************************************ 00:07:52.961 END TEST raid_state_function_test_sb 00:07:52.961 ************************************ 00:07:52.961 10:02:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:52.961 10:02:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:52.961 10:02:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.961 10:02:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 ************************************ 00:07:52.961 START TEST raid_superblock_test 00:07:52.961 ************************************ 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:52.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62005 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62005 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62005 ']' 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.961 10:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 [2024-11-19 10:02:06.873015] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:52.961 [2024-11-19 10:02:06.873192] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62005 ] 00:07:52.961 [2024-11-19 10:02:07.054611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.220 [2024-11-19 10:02:07.207379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.220 [2024-11-19 10:02:07.438052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.220 [2024-11-19 10:02:07.438264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:53.788 
10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.788 malloc1 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.788 [2024-11-19 10:02:07.984589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.788 [2024-11-19 10:02:07.984905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.788 [2024-11-19 10:02:07.985062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.788 [2024-11-19 10:02:07.985222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.788 [2024-11-19 10:02:07.988659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.788 [2024-11-19 10:02:07.988858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.788 pt1 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.788 10:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 malloc2 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 [2024-11-19 10:02:08.050674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.048 [2024-11-19 10:02:08.050942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.048 [2024-11-19 10:02:08.051026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.048 [2024-11-19 10:02:08.051260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.048 [2024-11-19 10:02:08.054652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.048 [2024-11-19 10:02:08.054870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.048 
pt2 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 [2024-11-19 10:02:08.063249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.048 [2024-11-19 10:02:08.066176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.048 [2024-11-19 10:02:08.066408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.048 [2024-11-19 10:02:08.066429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.048 [2024-11-19 10:02:08.066748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.048 [2024-11-19 10:02:08.067127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.048 [2024-11-19 10:02:08.067272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:54.048 [2024-11-19 10:02:08.067719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.048 "name": "raid_bdev1", 00:07:54.048 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:54.048 "strip_size_kb": 64, 00:07:54.048 "state": "online", 00:07:54.048 "raid_level": "concat", 00:07:54.048 "superblock": true, 00:07:54.048 "num_base_bdevs": 2, 00:07:54.048 "num_base_bdevs_discovered": 2, 00:07:54.048 "num_base_bdevs_operational": 2, 00:07:54.048 "base_bdevs_list": [ 00:07:54.048 { 00:07:54.048 "name": "pt1", 
00:07:54.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.048 "is_configured": true, 00:07:54.048 "data_offset": 2048, 00:07:54.048 "data_size": 63488 00:07:54.048 }, 00:07:54.048 { 00:07:54.048 "name": "pt2", 00:07:54.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.048 "is_configured": true, 00:07:54.048 "data_offset": 2048, 00:07:54.048 "data_size": 63488 00:07:54.048 } 00:07:54.048 ] 00:07:54.048 }' 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.048 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.616 [2024-11-19 10:02:08.612364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.616 10:02:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.616 "name": "raid_bdev1", 00:07:54.616 "aliases": [ 00:07:54.616 "432a713b-ee54-4882-9124-10d1a9807860" 00:07:54.616 ], 00:07:54.616 "product_name": "Raid Volume", 00:07:54.616 "block_size": 512, 00:07:54.616 "num_blocks": 126976, 00:07:54.616 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:54.616 "assigned_rate_limits": { 00:07:54.616 "rw_ios_per_sec": 0, 00:07:54.616 "rw_mbytes_per_sec": 0, 00:07:54.616 "r_mbytes_per_sec": 0, 00:07:54.616 "w_mbytes_per_sec": 0 00:07:54.616 }, 00:07:54.616 "claimed": false, 00:07:54.616 "zoned": false, 00:07:54.616 "supported_io_types": { 00:07:54.616 "read": true, 00:07:54.616 "write": true, 00:07:54.616 "unmap": true, 00:07:54.616 "flush": true, 00:07:54.616 "reset": true, 00:07:54.616 "nvme_admin": false, 00:07:54.616 "nvme_io": false, 00:07:54.616 "nvme_io_md": false, 00:07:54.616 "write_zeroes": true, 00:07:54.616 "zcopy": false, 00:07:54.616 "get_zone_info": false, 00:07:54.616 "zone_management": false, 00:07:54.616 "zone_append": false, 00:07:54.616 "compare": false, 00:07:54.616 "compare_and_write": false, 00:07:54.616 "abort": false, 00:07:54.616 "seek_hole": false, 00:07:54.616 "seek_data": false, 00:07:54.616 "copy": false, 00:07:54.616 "nvme_iov_md": false 00:07:54.616 }, 00:07:54.616 "memory_domains": [ 00:07:54.616 { 00:07:54.616 "dma_device_id": "system", 00:07:54.616 "dma_device_type": 1 00:07:54.616 }, 00:07:54.616 { 00:07:54.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.616 "dma_device_type": 2 00:07:54.616 }, 00:07:54.616 { 00:07:54.616 "dma_device_id": "system", 00:07:54.616 "dma_device_type": 1 00:07:54.616 }, 00:07:54.616 { 00:07:54.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.616 "dma_device_type": 2 00:07:54.616 } 00:07:54.616 ], 00:07:54.616 "driver_specific": { 00:07:54.616 "raid": { 00:07:54.616 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:54.616 "strip_size_kb": 64, 00:07:54.616 "state": "online", 00:07:54.616 
"raid_level": "concat", 00:07:54.616 "superblock": true, 00:07:54.617 "num_base_bdevs": 2, 00:07:54.617 "num_base_bdevs_discovered": 2, 00:07:54.617 "num_base_bdevs_operational": 2, 00:07:54.617 "base_bdevs_list": [ 00:07:54.617 { 00:07:54.617 "name": "pt1", 00:07:54.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.617 "is_configured": true, 00:07:54.617 "data_offset": 2048, 00:07:54.617 "data_size": 63488 00:07:54.617 }, 00:07:54.617 { 00:07:54.617 "name": "pt2", 00:07:54.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.617 "is_configured": true, 00:07:54.617 "data_offset": 2048, 00:07:54.617 "data_size": 63488 00:07:54.617 } 00:07:54.617 ] 00:07:54.617 } 00:07:54.617 } 00:07:54.617 }' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.617 pt2' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.617 10:02:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.877 [2024-11-19 10:02:08.852268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=432a713b-ee54-4882-9124-10d1a9807860 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
432a713b-ee54-4882-9124-10d1a9807860 ']' 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 [2024-11-19 10:02:08.899922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.877 [2024-11-19 10:02:08.899952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.877 [2024-11-19 10:02:08.900061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.877 [2024-11-19 10:02:08.900143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.877 [2024-11-19 10:02:08.900167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.877 10:02:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 [2024-11-19 10:02:09.040037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.877 [2024-11-19 10:02:09.043085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.877 [2024-11-19 10:02:09.043219] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:54.877 [2024-11-19 10:02:09.043344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:54.877 [2024-11-19 10:02:09.043372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.877 [2024-11-19 10:02:09.043389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:54.877 request: 00:07:54.877 { 00:07:54.877 "name": "raid_bdev1", 00:07:54.877 "raid_level": "concat", 00:07:54.877 "base_bdevs": [ 00:07:54.877 "malloc1", 00:07:54.877 "malloc2" 00:07:54.877 ], 00:07:54.877 "strip_size_kb": 64, 
00:07:54.877 "superblock": false, 00:07:54.877 "method": "bdev_raid_create", 00:07:54.877 "req_id": 1 00:07:54.877 } 00:07:54.877 Got JSON-RPC error response 00:07:54.877 response: 00:07:54.877 { 00:07:54.877 "code": -17, 00:07:54.877 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.877 } 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.877 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 [2024-11-19 10:02:09.108191] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:54.877 [2024-11-19 10:02:09.108429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.877 [2024-11-19 10:02:09.108508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:54.877 [2024-11-19 10:02:09.108637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.136 [2024-11-19 10:02:09.112318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.136 [2024-11-19 10:02:09.112488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.136 [2024-11-19 10:02:09.112696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.136 [2024-11-19 10:02:09.112927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.136 pt1 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.136 "name": "raid_bdev1", 00:07:55.136 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:55.136 "strip_size_kb": 64, 00:07:55.136 "state": "configuring", 00:07:55.136 "raid_level": "concat", 00:07:55.136 "superblock": true, 00:07:55.136 "num_base_bdevs": 2, 00:07:55.136 "num_base_bdevs_discovered": 1, 00:07:55.136 "num_base_bdevs_operational": 2, 00:07:55.136 "base_bdevs_list": [ 00:07:55.136 { 00:07:55.136 "name": "pt1", 00:07:55.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.136 "is_configured": true, 00:07:55.136 "data_offset": 2048, 00:07:55.136 "data_size": 63488 00:07:55.136 }, 00:07:55.136 { 00:07:55.136 "name": null, 00:07:55.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.136 "is_configured": false, 00:07:55.136 "data_offset": 2048, 00:07:55.136 "data_size": 63488 00:07:55.136 } 00:07:55.136 ] 00:07:55.136 }' 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.136 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.703 [2024-11-19 10:02:09.717101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.703 [2024-11-19 10:02:09.717259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.703 [2024-11-19 10:02:09.717295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.703 [2024-11-19 10:02:09.717314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.703 [2024-11-19 10:02:09.718004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.703 [2024-11-19 10:02:09.718044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.703 [2024-11-19 10:02:09.718176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.703 [2024-11-19 10:02:09.718232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.703 [2024-11-19 10:02:09.718389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.703 [2024-11-19 10:02:09.718420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.703 [2024-11-19 10:02:09.718762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.703 [2024-11-19 10:02:09.719003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:55.703 [2024-11-19 10:02:09.719078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.703 [2024-11-19 10:02:09.719291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.703 pt2 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.703 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.704 "name": "raid_bdev1", 00:07:55.704 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:55.704 "strip_size_kb": 64, 00:07:55.704 "state": "online", 00:07:55.704 "raid_level": "concat", 00:07:55.704 "superblock": true, 00:07:55.704 "num_base_bdevs": 2, 00:07:55.704 "num_base_bdevs_discovered": 2, 00:07:55.704 "num_base_bdevs_operational": 2, 00:07:55.704 "base_bdevs_list": [ 00:07:55.704 { 00:07:55.704 "name": "pt1", 00:07:55.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.704 "is_configured": true, 00:07:55.704 "data_offset": 2048, 00:07:55.704 "data_size": 63488 00:07:55.704 }, 00:07:55.704 { 00:07:55.704 "name": "pt2", 00:07:55.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.704 "is_configured": true, 00:07:55.704 "data_offset": 2048, 00:07:55.704 "data_size": 63488 00:07:55.704 } 00:07:55.704 ] 00:07:55.704 }' 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.704 10:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.271 10:02:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 [2024-11-19 10:02:10.245607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.271 "name": "raid_bdev1", 00:07:56.271 "aliases": [ 00:07:56.271 "432a713b-ee54-4882-9124-10d1a9807860" 00:07:56.271 ], 00:07:56.271 "product_name": "Raid Volume", 00:07:56.271 "block_size": 512, 00:07:56.271 "num_blocks": 126976, 00:07:56.271 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:56.271 "assigned_rate_limits": { 00:07:56.271 "rw_ios_per_sec": 0, 00:07:56.271 "rw_mbytes_per_sec": 0, 00:07:56.271 "r_mbytes_per_sec": 0, 00:07:56.271 "w_mbytes_per_sec": 0 00:07:56.271 }, 00:07:56.271 "claimed": false, 00:07:56.271 "zoned": false, 00:07:56.271 "supported_io_types": { 00:07:56.271 "read": true, 00:07:56.271 "write": true, 00:07:56.271 "unmap": true, 00:07:56.271 "flush": true, 00:07:56.271 "reset": true, 00:07:56.271 "nvme_admin": false, 00:07:56.271 "nvme_io": false, 00:07:56.271 "nvme_io_md": false, 00:07:56.271 "write_zeroes": true, 00:07:56.271 "zcopy": false, 00:07:56.271 "get_zone_info": false, 00:07:56.271 "zone_management": false, 00:07:56.271 "zone_append": false, 00:07:56.271 "compare": false, 00:07:56.271 "compare_and_write": false, 00:07:56.271 "abort": false, 00:07:56.271 "seek_hole": false, 00:07:56.271 
"seek_data": false, 00:07:56.271 "copy": false, 00:07:56.271 "nvme_iov_md": false 00:07:56.271 }, 00:07:56.271 "memory_domains": [ 00:07:56.271 { 00:07:56.271 "dma_device_id": "system", 00:07:56.271 "dma_device_type": 1 00:07:56.271 }, 00:07:56.271 { 00:07:56.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.271 "dma_device_type": 2 00:07:56.271 }, 00:07:56.271 { 00:07:56.271 "dma_device_id": "system", 00:07:56.271 "dma_device_type": 1 00:07:56.271 }, 00:07:56.271 { 00:07:56.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.271 "dma_device_type": 2 00:07:56.271 } 00:07:56.271 ], 00:07:56.271 "driver_specific": { 00:07:56.271 "raid": { 00:07:56.271 "uuid": "432a713b-ee54-4882-9124-10d1a9807860", 00:07:56.271 "strip_size_kb": 64, 00:07:56.271 "state": "online", 00:07:56.271 "raid_level": "concat", 00:07:56.271 "superblock": true, 00:07:56.271 "num_base_bdevs": 2, 00:07:56.271 "num_base_bdevs_discovered": 2, 00:07:56.271 "num_base_bdevs_operational": 2, 00:07:56.271 "base_bdevs_list": [ 00:07:56.271 { 00:07:56.271 "name": "pt1", 00:07:56.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.271 "is_configured": true, 00:07:56.271 "data_offset": 2048, 00:07:56.271 "data_size": 63488 00:07:56.271 }, 00:07:56.271 { 00:07:56.271 "name": "pt2", 00:07:56.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.271 "is_configured": true, 00:07:56.271 "data_offset": 2048, 00:07:56.271 "data_size": 63488 00:07:56.271 } 00:07:56.271 ] 00:07:56.271 } 00:07:56.271 } 00:07:56.271 }' 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.271 pt2' 00:07:56.271 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.271 10:02:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.272 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.531 [2024-11-19 10:02:10.509596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 432a713b-ee54-4882-9124-10d1a9807860 '!=' 432a713b-ee54-4882-9124-10d1a9807860 ']' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62005 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62005 ']' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62005 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62005 00:07:56.531 killing process with pid 62005 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62005' 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62005 00:07:56.531 [2024-11-19 10:02:10.586267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.531 10:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62005 00:07:56.531 [2024-11-19 10:02:10.586399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.531 [2024-11-19 10:02:10.586477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.531 [2024-11-19 10:02:10.586507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.790 [2024-11-19 10:02:10.793718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.167 10:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:58.167 00:07:58.167 real 0m5.210s 00:07:58.167 user 0m7.533s 00:07:58.167 sys 0m0.837s 00:07:58.167 10:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.167 10:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.167 ************************************ 00:07:58.167 END TEST raid_superblock_test 00:07:58.167 ************************************ 00:07:58.167 10:02:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:58.167 10:02:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.167 10:02:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.167 10:02:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.167 ************************************ 00:07:58.167 START TEST raid_read_error_test 00:07:58.167 ************************************ 00:07:58.167 10:02:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.167 10:02:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PvpdWEFjEm 00:07:58.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62222 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62222 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62222 ']' 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.167 10:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.167 [2024-11-19 10:02:12.182936] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:07:58.167 [2024-11-19 10:02:12.183412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:07:58.167 [2024-11-19 10:02:12.387722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.426 [2024-11-19 10:02:12.559961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.684 [2024-11-19 10:02:12.811728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.684 [2024-11-19 10:02:12.812162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.942 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 BaseBdev1_malloc 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 true 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 [2024-11-19 10:02:13.226890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.201 [2024-11-19 10:02:13.227036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.201 [2024-11-19 10:02:13.227089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:59.201 [2024-11-19 10:02:13.227108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.201 [2024-11-19 10:02:13.230536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.201 [2024-11-19 10:02:13.230600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.201 BaseBdev1 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 BaseBdev2_malloc 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 true 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 [2024-11-19 10:02:13.295053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:59.201 [2024-11-19 10:02:13.295157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.201 [2024-11-19 10:02:13.295183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.201 [2024-11-19 10:02:13.295211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.201 [2024-11-19 10:02:13.298404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.201 [2024-11-19 10:02:13.298636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:59.201 BaseBdev2 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.201 [2024-11-19 10:02:13.307314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:59.201 [2024-11-19 10:02:13.310261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:59.201 [2024-11-19 10:02:13.310544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:59.201 [2024-11-19 10:02:13.310582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:59.201 [2024-11-19 10:02:13.310955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:59.201 [2024-11-19 10:02:13.311234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:59.201 [2024-11-19 10:02:13.311254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:59.201 [2024-11-19 10:02:13.311507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:59.201 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:59.202 "name": "raid_bdev1",
00:07:59.202 "uuid": "2fecb261-4fc0-4958-a7d0-005df2d2f02c",
00:07:59.202 "strip_size_kb": 64,
00:07:59.202 "state": "online",
00:07:59.202 "raid_level": "concat",
00:07:59.202 "superblock": true,
00:07:59.202 "num_base_bdevs": 2,
00:07:59.202 "num_base_bdevs_discovered": 2,
00:07:59.202 "num_base_bdevs_operational": 2,
00:07:59.202 "base_bdevs_list": [
00:07:59.202 {
00:07:59.202 "name": "BaseBdev1",
00:07:59.202 "uuid": "ef4df054-dacc-5a4f-8720-f4006ada33cb",
00:07:59.202 "is_configured": true,
00:07:59.202 "data_offset": 2048,
00:07:59.202 "data_size": 63488
00:07:59.202 },
00:07:59.202 {
00:07:59.202 "name": "BaseBdev2",
00:07:59.202 "uuid": "f049c1bd-c68e-5b0a-b174-ac1d8f8ae114",
00:07:59.202 "is_configured": true,
00:07:59.202 "data_offset": 2048,
00:07:59.202 "data_size": 63488
00:07:59.202 }
00:07:59.202 ]
00:07:59.202 }'
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:59.202 10:02:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:59.769 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:59.769 10:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:59.769 [2024-11-19 10:02:13.989435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:00.705 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:00.705 "name": "raid_bdev1",
00:08:00.705 "uuid": "2fecb261-4fc0-4958-a7d0-005df2d2f02c",
00:08:00.705 "strip_size_kb": 64,
00:08:00.705 "state": "online",
00:08:00.705 "raid_level": "concat",
00:08:00.705 "superblock": true,
00:08:00.705 "num_base_bdevs": 2,
00:08:00.705 "num_base_bdevs_discovered": 2,
00:08:00.705 "num_base_bdevs_operational": 2,
00:08:00.705 "base_bdevs_list": [
00:08:00.705 {
00:08:00.705 "name": "BaseBdev1",
00:08:00.705 "uuid": "ef4df054-dacc-5a4f-8720-f4006ada33cb",
00:08:00.705 "is_configured": true,
00:08:00.705 "data_offset": 2048,
00:08:00.705 "data_size": 63488
00:08:00.705 },
00:08:00.705 {
00:08:00.705 "name": "BaseBdev2",
00:08:00.705 "uuid": "f049c1bd-c68e-5b0a-b174-ac1d8f8ae114",
00:08:00.705 "is_configured": true,
00:08:00.705 "data_offset": 2048,
00:08:00.705 "data_size": 63488
00:08:00.705 }
00:08:00.705 ]
00:08:00.705 }'
00:08:00.706 10:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:00.706 10:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.273 [2024-11-19 10:02:15.388466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:01.273 [2024-11-19 10:02:15.388729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:01.273 {
00:08:01.273 "results": [
00:08:01.273 {
00:08:01.273 "job": "raid_bdev1",
00:08:01.273 "core_mask": "0x1",
00:08:01.273 "workload": "randrw",
00:08:01.273 "percentage": 50,
00:08:01.273 "status": "finished",
00:08:01.273 "queue_depth": 1,
00:08:01.273 "io_size": 131072,
00:08:01.273 "runtime": 1.396299,
00:08:01.273 "iops": 9883.269987302147,
00:08:01.273 "mibps": 1235.4087484127683,
00:08:01.273 "io_failed": 1,
00:08:01.273 "io_timeout": 0,
00:08:01.273 "avg_latency_us": 142.28925387488388,
00:08:01.273 "min_latency_us": 38.4,
00:08:01.273 "max_latency_us": 1846.9236363636364
00:08:01.273 }
00:08:01.273 ],
00:08:01.273 "core_count": 1
00:08:01.273 }
00:08:01.273 [2024-11-19 10:02:15.392355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:01.273 [2024-11-19 10:02:15.392474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:01.273 [2024-11-19 10:02:15.392566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:01.273 [2024-11-19 10:02:15.392604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62222
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62222 ']'
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62222
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62222
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:01.273 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:01.273 killing process with pid 62222 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62222' 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62222
00:08:01.273 [2024-11-19 10:02:15.444647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:01.274 10:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62222
00:08:01.532 [2024-11-19 10:02:15.573934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PvpdWEFjEm
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:08:02.909
00:08:02.909 real 0m4.753s
00:08:02.909 user 0m5.868s
00:08:02.909 sys 0m0.661s
00:08:02.909 10:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.909 ************************************
00:08:02.909 END TEST raid_read_error_test
00:08:02.909 ************************************
00:08:02.910 10:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.910 10:02:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:08:02.910 10:02:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:02.910 10:02:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.910 10:02:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:02.910 ************************************
00:08:02.910 START TEST raid_write_error_test
00:08:02.910 ************************************
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EJGBM4qOe0
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62368
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62368
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62368 ']'
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:02.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:02.910 10:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.910 [2024-11-19 10:02:16.981927] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization...
00:08:02.910 [2024-11-19 10:02:16.982123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62368 ]
00:08:03.201 [2024-11-19 10:02:17.174394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:03.201 [2024-11-19 10:02:17.339078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.498 [2024-11-19 10:02:17.577597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:03.498 [2024-11-19 10:02:17.577708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.756 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.015 BaseBdev1_malloc
00:08:04.015 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.015 10:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:04.015 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.015 10:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.015 true
00:08:04.015 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.015 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:04.015 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.015 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 [2024-11-19 10:02:18.004036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:04.016 [2024-11-19 10:02:18.004119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:04.016 [2024-11-19 10:02:18.004149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:04.016 [2024-11-19 10:02:18.004168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:04.016 [2024-11-19 10:02:18.007514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:04.016 [2024-11-19 10:02:18.007577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:04.016 BaseBdev1
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 BaseBdev2_malloc
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 true
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 [2024-11-19 10:02:18.075078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:04.016 [2024-11-19 10:02:18.075184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:04.016 [2024-11-19 10:02:18.075227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:04.016 [2024-11-19 10:02:18.075247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:04.016 [2024-11-19 10:02:18.078630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:04.016 [2024-11-19 10:02:18.078687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:04.016 BaseBdev2
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 [2024-11-19 10:02:18.087440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:04.016 [2024-11-19 10:02:18.090087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:04.016 [2024-11-19 10:02:18.090415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:04.016 [2024-11-19 10:02:18.090456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:04.016 [2024-11-19 10:02:18.090738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:04.016 [2024-11-19 10:02:18.091044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:04.016 [2024-11-19 10:02:18.091075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:04.016 [2024-11-19 10:02:18.091318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:04.016 "name": "raid_bdev1",
00:08:04.016 "uuid": "eea40817-324c-4e78-917a-b2b5baa3d350",
00:08:04.016 "strip_size_kb": 64,
00:08:04.016 "state": "online",
00:08:04.016 "raid_level": "concat",
00:08:04.016 "superblock": true,
00:08:04.016 "num_base_bdevs": 2,
00:08:04.016 "num_base_bdevs_discovered": 2,
00:08:04.016 "num_base_bdevs_operational": 2,
00:08:04.016 "base_bdevs_list": [
00:08:04.016 {
00:08:04.016 "name": "BaseBdev1",
00:08:04.016 "uuid": "791026e1-4c47-5e33-9b2b-85705352b5ae",
00:08:04.016 "is_configured": true,
00:08:04.016 "data_offset": 2048,
00:08:04.016 "data_size": 63488
00:08:04.016 },
00:08:04.016 {
00:08:04.016 "name": "BaseBdev2",
00:08:04.016 "uuid": "e8421c61-25f5-5b84-9e3a-41ca44154657",
00:08:04.016 "is_configured": true,
00:08:04.016 "data_offset": 2048,
00:08:04.016 "data_size": 63488
00:08:04.016 }
00:08:04.016 ]
00:08:04.016 }'
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:04.016 10:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.585 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:04.585 10:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:04.585 [2024-11-19 10:02:18.753281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.524 "name": "raid_bdev1",
00:08:05.524 "uuid": "eea40817-324c-4e78-917a-b2b5baa3d350",
00:08:05.524 "strip_size_kb": 64,
00:08:05.524 "state": "online",
00:08:05.524 "raid_level": "concat",
00:08:05.524 "superblock": true,
00:08:05.524 "num_base_bdevs": 2,
00:08:05.524 "num_base_bdevs_discovered": 2,
00:08:05.524 "num_base_bdevs_operational": 2,
00:08:05.524 "base_bdevs_list": [
00:08:05.524 {
00:08:05.524 "name": "BaseBdev1",
00:08:05.524 "uuid": "791026e1-4c47-5e33-9b2b-85705352b5ae",
00:08:05.524 "is_configured": true,
00:08:05.524 "data_offset": 2048,
00:08:05.524 "data_size": 63488
00:08:05.524 },
00:08:05.524 {
00:08:05.524 "name": "BaseBdev2",
00:08:05.524 "uuid": "e8421c61-25f5-5b84-9e3a-41ca44154657",
00:08:05.524 "is_configured": true,
00:08:05.524 "data_offset": 2048,
00:08:05.524 "data_size": 63488
00:08:05.524 }
00:08:05.524 ]
00:08:05.524 }'
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.524 10:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.093 [2024-11-19 10:02:20.175967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:06.093 [2024-11-19 10:02:20.176011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:06.093 [2024-11-19 10:02:20.179518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:06.093 [2024-11-19 10:02:20.179570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:06.093 [2024-11-19 10:02:20.179616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:06.093 [2024-11-19 10:02:20.179648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:06.093 {
00:08:06.093 "results": [
00:08:06.093 {
00:08:06.093 "job": "raid_bdev1",
00:08:06.093 "core_mask": "0x1",
00:08:06.093 "workload": "randrw",
00:08:06.093 "percentage": 50,
00:08:06.093 "status": "finished",
00:08:06.093 "queue_depth": 1,
00:08:06.093 "io_size": 131072,
00:08:06.093 "runtime": 1.419951,
00:08:06.093 "iops": 9636.24801137504,
00:08:06.093 "mibps": 1204.53100142188,
00:08:06.093 "io_failed": 1,
00:08:06.093 "io_timeout": 0,
00:08:06.093 "avg_latency_us": 146.08667959926655,
00:08:06.093 "min_latency_us": 36.07272727272727,
00:08:06.093 "max_latency_us": 2025.658181818182
00:08:06.093 }
00:08:06.093 ],
00:08:06.093 "core_count": 1
00:08:06.093 }
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62368
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62368 ']'
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62368
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62368
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62368' killing process with pid 62368 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62368
00:08:06.093 [2024-11-19 10:02:20.227693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:06.093 10:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62368
00:08:06.352 [2024-11-19 10:02:20.364958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EJGBM4qOe0
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:08:07.731
00:08:07.731 real 0m4.760s
00:08:07.731 user 0m5.839s
00:08:07.731 sys 0m0.683s
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.731 ************************************
00:08:07.731 END TEST raid_write_error_test
00:08:07.731 ************************************
00:08:07.731 10:02:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.731 10:02:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:07.731 10:02:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:08:07.731 10:02:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:07.731 10:02:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.731 10:02:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:07.731 ************************************
00:08:07.731 START TEST raid_state_function_test
00:08:07.731 ************************************
00:08:07.731 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:08:07.731 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:08:07.731 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:07.732 Process raid pid: 62511
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62511
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62511'
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62511
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62511 ']'
00:08:07.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.732 10:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.732 [2024-11-19 10:02:21.771330] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization...
00:08:07.732 [2024-11-19 10:02:21.771578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:07.990 [2024-11-19 10:02:21.952442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:08.250 [2024-11-19 10:02:22.112249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.818 [2024-11-19 10:02:22.354251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:08.818 [2024-11-19 10:02:22.354321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.818 [2024-11-19 10:02:22.795528] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:08.818 [2024-11-19 10:02:22.795620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:08.818 [2024-11-19 10:02:22.795648] bdev.c:8278:bdev_open_ext: *NOTICE*:
Currently unable to find bdev with name: BaseBdev2 00:08:08.818 [2024-11-19 10:02:22.795674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.818 "name": "Existed_Raid", 00:08:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.818 "strip_size_kb": 0, 00:08:08.818 "state": "configuring", 00:08:08.818 "raid_level": "raid1", 00:08:08.818 "superblock": false, 00:08:08.818 "num_base_bdevs": 2, 00:08:08.818 "num_base_bdevs_discovered": 0, 00:08:08.818 "num_base_bdevs_operational": 2, 00:08:08.818 "base_bdevs_list": [ 00:08:08.818 { 00:08:08.818 "name": "BaseBdev1", 00:08:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.818 "is_configured": false, 00:08:08.818 "data_offset": 0, 00:08:08.818 "data_size": 0 00:08:08.818 }, 00:08:08.818 { 00:08:08.818 "name": "BaseBdev2", 00:08:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.818 "is_configured": false, 00:08:08.818 "data_offset": 0, 00:08:08.818 "data_size": 0 00:08:08.818 } 00:08:08.818 ] 00:08:08.818 }' 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.818 10:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.077 [2024-11-19 10:02:23.299674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.077 [2024-11-19 10:02:23.299726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.077 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.336 [2024-11-19 10:02:23.311630] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.336 [2024-11-19 10:02:23.311689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.336 [2024-11-19 10:02:23.311718] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.336 [2024-11-19 10:02:23.311738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.336 [2024-11-19 10:02:23.362857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.336 BaseBdev1 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.336 
10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.336 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.337 [ 00:08:09.337 { 00:08:09.337 "name": "BaseBdev1", 00:08:09.337 "aliases": [ 00:08:09.337 "a9e86903-0b9b-4baa-8830-766d83aec1ff" 00:08:09.337 ], 00:08:09.337 "product_name": "Malloc disk", 00:08:09.337 "block_size": 512, 00:08:09.337 "num_blocks": 65536, 00:08:09.337 "uuid": "a9e86903-0b9b-4baa-8830-766d83aec1ff", 00:08:09.337 "assigned_rate_limits": { 00:08:09.337 "rw_ios_per_sec": 0, 00:08:09.337 "rw_mbytes_per_sec": 0, 00:08:09.337 "r_mbytes_per_sec": 0, 00:08:09.337 "w_mbytes_per_sec": 0 00:08:09.337 }, 00:08:09.337 "claimed": true, 00:08:09.337 "claim_type": "exclusive_write", 00:08:09.337 "zoned": false, 00:08:09.337 "supported_io_types": { 00:08:09.337 "read": true, 00:08:09.337 "write": true, 00:08:09.337 "unmap": true, 00:08:09.337 "flush": true, 00:08:09.337 "reset": true, 00:08:09.337 "nvme_admin": false, 00:08:09.337 "nvme_io": false, 00:08:09.337 "nvme_io_md": false, 00:08:09.337 "write_zeroes": true, 00:08:09.337 "zcopy": true, 00:08:09.337 "get_zone_info": 
false, 00:08:09.337 "zone_management": false, 00:08:09.337 "zone_append": false, 00:08:09.337 "compare": false, 00:08:09.337 "compare_and_write": false, 00:08:09.337 "abort": true, 00:08:09.337 "seek_hole": false, 00:08:09.337 "seek_data": false, 00:08:09.337 "copy": true, 00:08:09.337 "nvme_iov_md": false 00:08:09.337 }, 00:08:09.337 "memory_domains": [ 00:08:09.337 { 00:08:09.337 "dma_device_id": "system", 00:08:09.337 "dma_device_type": 1 00:08:09.337 }, 00:08:09.337 { 00:08:09.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.337 "dma_device_type": 2 00:08:09.337 } 00:08:09.337 ], 00:08:09.337 "driver_specific": {} 00:08:09.337 } 00:08:09.337 ] 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.337 "name": "Existed_Raid", 00:08:09.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.337 "strip_size_kb": 0, 00:08:09.337 "state": "configuring", 00:08:09.337 "raid_level": "raid1", 00:08:09.337 "superblock": false, 00:08:09.337 "num_base_bdevs": 2, 00:08:09.337 "num_base_bdevs_discovered": 1, 00:08:09.337 "num_base_bdevs_operational": 2, 00:08:09.337 "base_bdevs_list": [ 00:08:09.337 { 00:08:09.337 "name": "BaseBdev1", 00:08:09.337 "uuid": "a9e86903-0b9b-4baa-8830-766d83aec1ff", 00:08:09.337 "is_configured": true, 00:08:09.337 "data_offset": 0, 00:08:09.337 "data_size": 65536 00:08:09.337 }, 00:08:09.337 { 00:08:09.337 "name": "BaseBdev2", 00:08:09.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.337 "is_configured": false, 00:08:09.337 "data_offset": 0, 00:08:09.337 "data_size": 0 00:08:09.337 } 00:08:09.337 ] 00:08:09.337 }' 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.337 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 [2024-11-19 10:02:23.939172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.905 [2024-11-19 10:02:23.939323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 [2024-11-19 10:02:23.951208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.905 [2024-11-19 10:02:23.954530] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.905 [2024-11-19 10:02:23.954727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 10:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.905 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.905 "name": "Existed_Raid", 00:08:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.905 "strip_size_kb": 0, 00:08:09.905 "state": "configuring", 00:08:09.905 "raid_level": "raid1", 00:08:09.905 "superblock": false, 00:08:09.905 "num_base_bdevs": 2, 00:08:09.905 "num_base_bdevs_discovered": 1, 00:08:09.905 "num_base_bdevs_operational": 2, 00:08:09.905 "base_bdevs_list": [ 00:08:09.905 { 00:08:09.905 "name": "BaseBdev1", 00:08:09.905 "uuid": "a9e86903-0b9b-4baa-8830-766d83aec1ff", 00:08:09.905 
"is_configured": true, 00:08:09.905 "data_offset": 0, 00:08:09.905 "data_size": 65536 00:08:09.905 }, 00:08:09.905 { 00:08:09.905 "name": "BaseBdev2", 00:08:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.905 "is_configured": false, 00:08:09.905 "data_offset": 0, 00:08:09.905 "data_size": 0 00:08:09.905 } 00:08:09.905 ] 00:08:09.905 }' 00:08:09.905 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.905 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.508 [2024-11-19 10:02:24.531974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.508 [2024-11-19 10:02:24.532304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.508 [2024-11-19 10:02:24.532329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:10.508 [2024-11-19 10:02:24.532731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:10.508 [2024-11-19 10:02:24.532995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.508 [2024-11-19 10:02:24.533025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.508 [2024-11-19 10:02:24.533390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.508 BaseBdev2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.508 [ 00:08:10.508 { 00:08:10.508 "name": "BaseBdev2", 00:08:10.508 "aliases": [ 00:08:10.508 "a5343d38-e456-468c-9a7b-9b0ab64e2096" 00:08:10.508 ], 00:08:10.508 "product_name": "Malloc disk", 00:08:10.508 "block_size": 512, 00:08:10.508 "num_blocks": 65536, 00:08:10.508 "uuid": "a5343d38-e456-468c-9a7b-9b0ab64e2096", 00:08:10.508 "assigned_rate_limits": { 00:08:10.508 "rw_ios_per_sec": 0, 00:08:10.508 "rw_mbytes_per_sec": 0, 00:08:10.508 "r_mbytes_per_sec": 0, 00:08:10.508 "w_mbytes_per_sec": 0 00:08:10.508 }, 00:08:10.508 "claimed": true, 00:08:10.508 "claim_type": 
"exclusive_write", 00:08:10.508 "zoned": false, 00:08:10.508 "supported_io_types": { 00:08:10.508 "read": true, 00:08:10.508 "write": true, 00:08:10.508 "unmap": true, 00:08:10.508 "flush": true, 00:08:10.508 "reset": true, 00:08:10.508 "nvme_admin": false, 00:08:10.508 "nvme_io": false, 00:08:10.508 "nvme_io_md": false, 00:08:10.508 "write_zeroes": true, 00:08:10.508 "zcopy": true, 00:08:10.508 "get_zone_info": false, 00:08:10.508 "zone_management": false, 00:08:10.508 "zone_append": false, 00:08:10.508 "compare": false, 00:08:10.508 "compare_and_write": false, 00:08:10.508 "abort": true, 00:08:10.508 "seek_hole": false, 00:08:10.508 "seek_data": false, 00:08:10.508 "copy": true, 00:08:10.508 "nvme_iov_md": false 00:08:10.508 }, 00:08:10.508 "memory_domains": [ 00:08:10.508 { 00:08:10.508 "dma_device_id": "system", 00:08:10.508 "dma_device_type": 1 00:08:10.508 }, 00:08:10.508 { 00:08:10.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.508 "dma_device_type": 2 00:08:10.508 } 00:08:10.508 ], 00:08:10.508 "driver_specific": {} 00:08:10.508 } 00:08:10.508 ] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.508 
10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.508 "name": "Existed_Raid", 00:08:10.508 "uuid": "b075e3a9-dbf3-4508-ac04-8a07c6361607", 00:08:10.508 "strip_size_kb": 0, 00:08:10.508 "state": "online", 00:08:10.508 "raid_level": "raid1", 00:08:10.508 "superblock": false, 00:08:10.508 "num_base_bdevs": 2, 00:08:10.508 "num_base_bdevs_discovered": 2, 00:08:10.508 "num_base_bdevs_operational": 2, 00:08:10.508 "base_bdevs_list": [ 00:08:10.508 { 00:08:10.508 "name": "BaseBdev1", 00:08:10.508 "uuid": "a9e86903-0b9b-4baa-8830-766d83aec1ff", 00:08:10.508 "is_configured": true, 00:08:10.508 "data_offset": 0, 00:08:10.508 "data_size": 65536 00:08:10.508 }, 00:08:10.508 { 00:08:10.508 "name": "BaseBdev2", 
00:08:10.508 "uuid": "a5343d38-e456-468c-9a7b-9b0ab64e2096", 00:08:10.508 "is_configured": true, 00:08:10.508 "data_offset": 0, 00:08:10.508 "data_size": 65536 00:08:10.508 } 00:08:10.508 ] 00:08:10.508 }' 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.508 10:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.075 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.076 [2024-11-19 10:02:25.096750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.076 "name": "Existed_Raid", 00:08:11.076 "aliases": [ 00:08:11.076 "b075e3a9-dbf3-4508-ac04-8a07c6361607" 00:08:11.076 ], 
00:08:11.076 "product_name": "Raid Volume", 00:08:11.076 "block_size": 512, 00:08:11.076 "num_blocks": 65536, 00:08:11.076 "uuid": "b075e3a9-dbf3-4508-ac04-8a07c6361607", 00:08:11.076 "assigned_rate_limits": { 00:08:11.076 "rw_ios_per_sec": 0, 00:08:11.076 "rw_mbytes_per_sec": 0, 00:08:11.076 "r_mbytes_per_sec": 0, 00:08:11.076 "w_mbytes_per_sec": 0 00:08:11.076 }, 00:08:11.076 "claimed": false, 00:08:11.076 "zoned": false, 00:08:11.076 "supported_io_types": { 00:08:11.076 "read": true, 00:08:11.076 "write": true, 00:08:11.076 "unmap": false, 00:08:11.076 "flush": false, 00:08:11.076 "reset": true, 00:08:11.076 "nvme_admin": false, 00:08:11.076 "nvme_io": false, 00:08:11.076 "nvme_io_md": false, 00:08:11.076 "write_zeroes": true, 00:08:11.076 "zcopy": false, 00:08:11.076 "get_zone_info": false, 00:08:11.076 "zone_management": false, 00:08:11.076 "zone_append": false, 00:08:11.076 "compare": false, 00:08:11.076 "compare_and_write": false, 00:08:11.076 "abort": false, 00:08:11.076 "seek_hole": false, 00:08:11.076 "seek_data": false, 00:08:11.076 "copy": false, 00:08:11.076 "nvme_iov_md": false 00:08:11.076 }, 00:08:11.076 "memory_domains": [ 00:08:11.076 { 00:08:11.076 "dma_device_id": "system", 00:08:11.076 "dma_device_type": 1 00:08:11.076 }, 00:08:11.076 { 00:08:11.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.076 "dma_device_type": 2 00:08:11.076 }, 00:08:11.076 { 00:08:11.076 "dma_device_id": "system", 00:08:11.076 "dma_device_type": 1 00:08:11.076 }, 00:08:11.076 { 00:08:11.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.076 "dma_device_type": 2 00:08:11.076 } 00:08:11.076 ], 00:08:11.076 "driver_specific": { 00:08:11.076 "raid": { 00:08:11.076 "uuid": "b075e3a9-dbf3-4508-ac04-8a07c6361607", 00:08:11.076 "strip_size_kb": 0, 00:08:11.076 "state": "online", 00:08:11.076 "raid_level": "raid1", 00:08:11.076 "superblock": false, 00:08:11.076 "num_base_bdevs": 2, 00:08:11.076 "num_base_bdevs_discovered": 2, 00:08:11.076 "num_base_bdevs_operational": 
2, 00:08:11.076 "base_bdevs_list": [ 00:08:11.076 { 00:08:11.076 "name": "BaseBdev1", 00:08:11.076 "uuid": "a9e86903-0b9b-4baa-8830-766d83aec1ff", 00:08:11.076 "is_configured": true, 00:08:11.076 "data_offset": 0, 00:08:11.076 "data_size": 65536 00:08:11.076 }, 00:08:11.076 { 00:08:11.076 "name": "BaseBdev2", 00:08:11.076 "uuid": "a5343d38-e456-468c-9a7b-9b0ab64e2096", 00:08:11.076 "is_configured": true, 00:08:11.076 "data_offset": 0, 00:08:11.076 "data_size": 65536 00:08:11.076 } 00:08:11.076 ] 00:08:11.076 } 00:08:11.076 } 00:08:11.076 }' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.076 BaseBdev2' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.076 10:02:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.076 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.335 [2024-11-19 10:02:25.364491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
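The `bdev_raid.sh@188` step above pipes `bdev_raid_get_bdevs` output through a jq filter to collect the names of configured base bdevs. A minimal standalone sketch of that filter, run against illustrative JSON (not taken from this run), looks like:

```shell
# Illustrative input mimicking the shape of the raid bdev JSON in this log;
# the bdev names and the unconfigured third entry are made up for the demo.
raid_info='{"driver_specific":{"raid":{"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":"BaseBdev2","is_configured":true},{"name":"BaseBdev3","is_configured":false}]}}}'

# Same filter as bdev_raid.sh@188: keep only entries with is_configured == true
# and emit their names, one per line.
echo "$raid_info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# prints:
# BaseBdev1
# BaseBdev2
```

This is why `base_bdev_names` in the log holds the two-line value `BaseBdev1` / `BaseBdev2`: jq emits one name per line and the shell captures them into a single variable.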
00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.335 "name": "Existed_Raid", 00:08:11.335 "uuid": 
"b075e3a9-dbf3-4508-ac04-8a07c6361607", 00:08:11.335 "strip_size_kb": 0, 00:08:11.335 "state": "online", 00:08:11.335 "raid_level": "raid1", 00:08:11.335 "superblock": false, 00:08:11.335 "num_base_bdevs": 2, 00:08:11.335 "num_base_bdevs_discovered": 1, 00:08:11.335 "num_base_bdevs_operational": 1, 00:08:11.335 "base_bdevs_list": [ 00:08:11.335 { 00:08:11.335 "name": null, 00:08:11.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.335 "is_configured": false, 00:08:11.335 "data_offset": 0, 00:08:11.335 "data_size": 65536 00:08:11.335 }, 00:08:11.335 { 00:08:11.335 "name": "BaseBdev2", 00:08:11.335 "uuid": "a5343d38-e456-468c-9a7b-9b0ab64e2096", 00:08:11.335 "is_configured": true, 00:08:11.335 "data_offset": 0, 00:08:11.335 "data_size": 65536 00:08:11.335 } 00:08:11.335 ] 00:08:11.335 }' 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.335 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.902 10:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.903 [2024-11-19 10:02:25.997554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.903 [2024-11-19 10:02:25.997731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.903 [2024-11-19 10:02:26.100998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.903 [2024-11-19 10:02:26.101346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.903 [2024-11-19 10:02:26.101400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.903 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.161 
10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62511 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62511 ']' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62511 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62511 00:08:12.161 killing process with pid 62511 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62511' 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62511 00:08:12.161 [2024-11-19 10:02:26.193532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.161 10:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62511 00:08:12.161 [2024-11-19 10:02:26.209443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.096 10:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.096 00:08:13.096 real 0m5.661s 00:08:13.096 user 0m8.443s 00:08:13.096 sys 0m0.858s 00:08:13.096 10:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
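The `killprocess` sequence above uses `kill -0` purely as a liveness probe and `ps --no-headers -o comm=` to look up the process name before deciding how to terminate it. A hedged sketch of that pattern, probing the current shell's own pid rather than a test daemon:

```shell
# Sketch of the liveness-check idiom seen in killprocess above.
# kill -0 delivers no signal; its exit status only reports whether the pid
# exists and is signalable by the caller.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi

# Look up the command name for the pid, with no header line,
# the same way killprocess inspects the target before killing it.
ps --no-headers -o comm= "$pid"
```

Checking `comm` first lets the harness refuse to signal the wrong process if the pid was recycled between the test and the cleanup step.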
00:08:13.096 10:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.096 ************************************ 00:08:13.096 END TEST raid_state_function_test 00:08:13.096 ************************************ 00:08:13.354 10:02:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:13.354 10:02:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:13.354 10:02:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.354 10:02:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.354 ************************************ 00:08:13.354 START TEST raid_state_function_test_sb 00:08:13.354 ************************************ 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.354 Process raid pid: 62771 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:13.354 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62771 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62771' 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62771 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 62771 ']' 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.355 10:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 [2024-11-19 10:02:27.475510] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:13.355 [2024-11-19 10:02:27.476502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.613 [2024-11-19 10:02:27.660237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.872 [2024-11-19 10:02:27.851899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.131 [2024-11-19 10:02:28.132574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.131 [2024-11-19 10:02:28.132901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.390 [2024-11-19 10:02:28.557878] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.390 [2024-11-19 10:02:28.557982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.390 [2024-11-19 10:02:28.558004] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.390 [2024-11-19 10:02:28.558024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.390 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.650 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.650 "name": "Existed_Raid", 00:08:14.650 "uuid": "1dea4aef-8805-4ba8-b21c-8bdbaf89f89a", 00:08:14.650 "strip_size_kb": 0, 00:08:14.650 "state": "configuring", 00:08:14.650 "raid_level": "raid1", 00:08:14.650 "superblock": true, 00:08:14.650 "num_base_bdevs": 2, 00:08:14.650 "num_base_bdevs_discovered": 0, 00:08:14.650 "num_base_bdevs_operational": 2, 00:08:14.650 "base_bdevs_list": [ 00:08:14.650 { 00:08:14.650 "name": "BaseBdev1", 00:08:14.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.650 "is_configured": false, 00:08:14.650 "data_offset": 0, 00:08:14.650 "data_size": 0 00:08:14.650 }, 00:08:14.650 { 00:08:14.650 "name": "BaseBdev2", 00:08:14.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.650 "is_configured": false, 00:08:14.650 "data_offset": 0, 00:08:14.650 "data_size": 0 00:08:14.650 } 00:08:14.650 ] 00:08:14.650 }' 00:08:14.650 10:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.650 10:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.909 [2024-11-19 10:02:29.082040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.909 [2024-11-19 10:02:29.082116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.909 [2024-11-19 10:02:29.090007] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.909 [2024-11-19 10:02:29.090324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.909 [2024-11-19 10:02:29.090357] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.909 [2024-11-19 10:02:29.090383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.909 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
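`verify_raid_bdev_state` (the `bdev_raid.sh@113` lines above) selects one raid bdev by name from the `bdev_raid_get_bdevs all` array and then compares fields like `state` against expectations. A self-contained sketch of that selection, using made-up JSON in the same shape as the log's dumps:

```shell
# Illustrative array mimicking bdev_raid_get_bdevs output; the second entry
# is invented to show that the filter picks only the named bdev.
bdev_list='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":0},{"name":"Other_Raid","state":"online","num_base_bdevs_discovered":2}]'

# Same selection as bdev_raid.sh@113, then pull out the two fields
# the verifier compares.
echo "$bdev_list" | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
# prints: configuring 0
```

The test then string-compares each extracted field against the expected value (`configuring`, `online`, etc.), which is why a state mismatch shows up in these logs as a plain `[[ ... == ... ]]` failure rather than an RPC error.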
00:08:15.168 [2024-11-19 10:02:29.142074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.168 BaseBdev1 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.168 [ 00:08:15.168 { 00:08:15.168 "name": "BaseBdev1", 00:08:15.168 "aliases": [ 00:08:15.168 "a1e16c8c-4026-465f-9b95-8e622767e6d1" 00:08:15.168 ], 00:08:15.168 "product_name": "Malloc disk", 00:08:15.168 "block_size": 512, 
00:08:15.168 "num_blocks": 65536, 00:08:15.168 "uuid": "a1e16c8c-4026-465f-9b95-8e622767e6d1", 00:08:15.168 "assigned_rate_limits": { 00:08:15.168 "rw_ios_per_sec": 0, 00:08:15.168 "rw_mbytes_per_sec": 0, 00:08:15.168 "r_mbytes_per_sec": 0, 00:08:15.168 "w_mbytes_per_sec": 0 00:08:15.168 }, 00:08:15.168 "claimed": true, 00:08:15.168 "claim_type": "exclusive_write", 00:08:15.168 "zoned": false, 00:08:15.168 "supported_io_types": { 00:08:15.168 "read": true, 00:08:15.168 "write": true, 00:08:15.168 "unmap": true, 00:08:15.168 "flush": true, 00:08:15.168 "reset": true, 00:08:15.168 "nvme_admin": false, 00:08:15.168 "nvme_io": false, 00:08:15.168 "nvme_io_md": false, 00:08:15.168 "write_zeroes": true, 00:08:15.168 "zcopy": true, 00:08:15.168 "get_zone_info": false, 00:08:15.168 "zone_management": false, 00:08:15.168 "zone_append": false, 00:08:15.168 "compare": false, 00:08:15.168 "compare_and_write": false, 00:08:15.168 "abort": true, 00:08:15.168 "seek_hole": false, 00:08:15.168 "seek_data": false, 00:08:15.168 "copy": true, 00:08:15.168 "nvme_iov_md": false 00:08:15.168 }, 00:08:15.168 "memory_domains": [ 00:08:15.168 { 00:08:15.168 "dma_device_id": "system", 00:08:15.168 "dma_device_type": 1 00:08:15.168 }, 00:08:15.168 { 00:08:15.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.168 "dma_device_type": 2 00:08:15.168 } 00:08:15.168 ], 00:08:15.168 "driver_specific": {} 00:08:15.168 } 00:08:15.168 ] 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.168 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.169 "name": "Existed_Raid", 00:08:15.169 "uuid": "af95a5d1-216f-4fca-8cc2-432d80f90390", 00:08:15.169 "strip_size_kb": 0, 00:08:15.169 "state": "configuring", 00:08:15.169 "raid_level": "raid1", 00:08:15.169 "superblock": true, 00:08:15.169 "num_base_bdevs": 2, 00:08:15.169 "num_base_bdevs_discovered": 1, 00:08:15.169 "num_base_bdevs_operational": 2, 00:08:15.169 "base_bdevs_list": [ 00:08:15.169 { 00:08:15.169 "name": "BaseBdev1", 
00:08:15.169 "uuid": "a1e16c8c-4026-465f-9b95-8e622767e6d1", 00:08:15.169 "is_configured": true, 00:08:15.169 "data_offset": 2048, 00:08:15.169 "data_size": 63488 00:08:15.169 }, 00:08:15.169 { 00:08:15.169 "name": "BaseBdev2", 00:08:15.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.169 "is_configured": false, 00:08:15.169 "data_offset": 0, 00:08:15.169 "data_size": 0 00:08:15.169 } 00:08:15.169 ] 00:08:15.169 }' 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.169 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.736 [2024-11-19 10:02:29.686388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.736 [2024-11-19 10:02:29.686504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:15.736 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.737 [2024-11-19 10:02:29.694421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.737 [2024-11-19 10:02:29.696987] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:15.737 [2024-11-19 10:02:29.697056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.737 "name": "Existed_Raid", 00:08:15.737 "uuid": "bff7a553-b26c-4336-a7cc-ad701498f6e8", 00:08:15.737 "strip_size_kb": 0, 00:08:15.737 "state": "configuring", 00:08:15.737 "raid_level": "raid1", 00:08:15.737 "superblock": true, 00:08:15.737 "num_base_bdevs": 2, 00:08:15.737 "num_base_bdevs_discovered": 1, 00:08:15.737 "num_base_bdevs_operational": 2, 00:08:15.737 "base_bdevs_list": [ 00:08:15.737 { 00:08:15.737 "name": "BaseBdev1", 00:08:15.737 "uuid": "a1e16c8c-4026-465f-9b95-8e622767e6d1", 00:08:15.737 "is_configured": true, 00:08:15.737 "data_offset": 2048, 00:08:15.737 "data_size": 63488 00:08:15.737 }, 00:08:15.737 { 00:08:15.737 "name": "BaseBdev2", 00:08:15.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.737 "is_configured": false, 00:08:15.737 "data_offset": 0, 00:08:15.737 "data_size": 0 00:08:15.737 } 00:08:15.737 ] 00:08:15.737 }' 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.737 10:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.995 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.995 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.995 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.254 [2024-11-19 10:02:30.245668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.255 [2024-11-19 10:02:30.246438] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.255 [2024-11-19 10:02:30.246486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:16.255 BaseBdev2 00:08:16.255 [2024-11-19 10:02:30.246910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:16.255 [2024-11-19 10:02:30.247122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.255 [2024-11-19 10:02:30.247145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:16.255 [2024-11-19 10:02:30.247328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.255 [ 00:08:16.255 { 00:08:16.255 "name": "BaseBdev2", 00:08:16.255 "aliases": [ 00:08:16.255 "fc27b4ed-fd4f-4677-ae93-d5bb09542e35" 00:08:16.255 ], 00:08:16.255 "product_name": "Malloc disk", 00:08:16.255 "block_size": 512, 00:08:16.255 "num_blocks": 65536, 00:08:16.255 "uuid": "fc27b4ed-fd4f-4677-ae93-d5bb09542e35", 00:08:16.255 "assigned_rate_limits": { 00:08:16.255 "rw_ios_per_sec": 0, 00:08:16.255 "rw_mbytes_per_sec": 0, 00:08:16.255 "r_mbytes_per_sec": 0, 00:08:16.255 "w_mbytes_per_sec": 0 00:08:16.255 }, 00:08:16.255 "claimed": true, 00:08:16.255 "claim_type": "exclusive_write", 00:08:16.255 "zoned": false, 00:08:16.255 "supported_io_types": { 00:08:16.255 "read": true, 00:08:16.255 "write": true, 00:08:16.255 "unmap": true, 00:08:16.255 "flush": true, 00:08:16.255 "reset": true, 00:08:16.255 "nvme_admin": false, 00:08:16.255 "nvme_io": false, 00:08:16.255 "nvme_io_md": false, 00:08:16.255 "write_zeroes": true, 00:08:16.255 "zcopy": true, 00:08:16.255 "get_zone_info": false, 00:08:16.255 "zone_management": false, 00:08:16.255 "zone_append": false, 00:08:16.255 "compare": false, 00:08:16.255 "compare_and_write": false, 00:08:16.255 "abort": true, 00:08:16.255 "seek_hole": false, 00:08:16.255 "seek_data": false, 00:08:16.255 "copy": true, 00:08:16.255 "nvme_iov_md": false 00:08:16.255 }, 00:08:16.255 "memory_domains": [ 00:08:16.255 { 00:08:16.255 "dma_device_id": "system", 00:08:16.255 "dma_device_type": 1 00:08:16.255 }, 00:08:16.255 { 00:08:16.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.255 "dma_device_type": 2 00:08:16.255 } 00:08:16.255 ], 00:08:16.255 "driver_specific": 
{} 00:08:16.255 } 00:08:16.255 ] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.255 "name": "Existed_Raid", 00:08:16.255 "uuid": "bff7a553-b26c-4336-a7cc-ad701498f6e8", 00:08:16.255 "strip_size_kb": 0, 00:08:16.255 "state": "online", 00:08:16.255 "raid_level": "raid1", 00:08:16.255 "superblock": true, 00:08:16.255 "num_base_bdevs": 2, 00:08:16.255 "num_base_bdevs_discovered": 2, 00:08:16.255 "num_base_bdevs_operational": 2, 00:08:16.255 "base_bdevs_list": [ 00:08:16.255 { 00:08:16.255 "name": "BaseBdev1", 00:08:16.255 "uuid": "a1e16c8c-4026-465f-9b95-8e622767e6d1", 00:08:16.255 "is_configured": true, 00:08:16.255 "data_offset": 2048, 00:08:16.255 "data_size": 63488 00:08:16.255 }, 00:08:16.255 { 00:08:16.255 "name": "BaseBdev2", 00:08:16.255 "uuid": "fc27b4ed-fd4f-4677-ae93-d5bb09542e35", 00:08:16.255 "is_configured": true, 00:08:16.255 "data_offset": 2048, 00:08:16.255 "data_size": 63488 00:08:16.255 } 00:08:16.255 ] 00:08:16.255 }' 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.255 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 [2024-11-19 10:02:30.802328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.824 "name": "Existed_Raid", 00:08:16.824 "aliases": [ 00:08:16.824 "bff7a553-b26c-4336-a7cc-ad701498f6e8" 00:08:16.824 ], 00:08:16.824 "product_name": "Raid Volume", 00:08:16.824 "block_size": 512, 00:08:16.824 "num_blocks": 63488, 00:08:16.824 "uuid": "bff7a553-b26c-4336-a7cc-ad701498f6e8", 00:08:16.824 "assigned_rate_limits": { 00:08:16.824 "rw_ios_per_sec": 0, 00:08:16.824 "rw_mbytes_per_sec": 0, 00:08:16.824 "r_mbytes_per_sec": 0, 00:08:16.824 "w_mbytes_per_sec": 0 00:08:16.824 }, 00:08:16.824 "claimed": false, 00:08:16.824 "zoned": false, 00:08:16.824 "supported_io_types": { 00:08:16.824 "read": true, 00:08:16.824 "write": true, 00:08:16.824 "unmap": false, 00:08:16.824 "flush": false, 00:08:16.824 "reset": true, 00:08:16.824 "nvme_admin": false, 00:08:16.824 "nvme_io": false, 00:08:16.824 "nvme_io_md": false, 00:08:16.824 "write_zeroes": true, 00:08:16.824 "zcopy": false, 00:08:16.824 "get_zone_info": false, 00:08:16.824 "zone_management": false, 00:08:16.824 "zone_append": false, 00:08:16.824 "compare": false, 00:08:16.824 "compare_and_write": false, 
00:08:16.824 "abort": false, 00:08:16.824 "seek_hole": false, 00:08:16.824 "seek_data": false, 00:08:16.824 "copy": false, 00:08:16.824 "nvme_iov_md": false 00:08:16.824 }, 00:08:16.824 "memory_domains": [ 00:08:16.824 { 00:08:16.824 "dma_device_id": "system", 00:08:16.824 "dma_device_type": 1 00:08:16.824 }, 00:08:16.824 { 00:08:16.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.824 "dma_device_type": 2 00:08:16.824 }, 00:08:16.824 { 00:08:16.824 "dma_device_id": "system", 00:08:16.824 "dma_device_type": 1 00:08:16.824 }, 00:08:16.824 { 00:08:16.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.824 "dma_device_type": 2 00:08:16.824 } 00:08:16.824 ], 00:08:16.824 "driver_specific": { 00:08:16.824 "raid": { 00:08:16.824 "uuid": "bff7a553-b26c-4336-a7cc-ad701498f6e8", 00:08:16.824 "strip_size_kb": 0, 00:08:16.824 "state": "online", 00:08:16.824 "raid_level": "raid1", 00:08:16.824 "superblock": true, 00:08:16.824 "num_base_bdevs": 2, 00:08:16.824 "num_base_bdevs_discovered": 2, 00:08:16.824 "num_base_bdevs_operational": 2, 00:08:16.824 "base_bdevs_list": [ 00:08:16.824 { 00:08:16.824 "name": "BaseBdev1", 00:08:16.824 "uuid": "a1e16c8c-4026-465f-9b95-8e622767e6d1", 00:08:16.824 "is_configured": true, 00:08:16.824 "data_offset": 2048, 00:08:16.824 "data_size": 63488 00:08:16.824 }, 00:08:16.824 { 00:08:16.824 "name": "BaseBdev2", 00:08:16.824 "uuid": "fc27b4ed-fd4f-4677-ae93-d5bb09542e35", 00:08:16.824 "is_configured": true, 00:08:16.824 "data_offset": 2048, 00:08:16.824 "data_size": 63488 00:08:16.824 } 00:08:16.824 ] 00:08:16.824 } 00:08:16.824 } 00:08:16.824 }' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.824 BaseBdev2' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 10:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.824 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.824 [2024-11-19 10:02:31.038078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:17.083 10:02:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.083 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.083 "name": "Existed_Raid", 00:08:17.083 "uuid": "bff7a553-b26c-4336-a7cc-ad701498f6e8", 00:08:17.083 "strip_size_kb": 0, 00:08:17.083 "state": "online", 00:08:17.083 "raid_level": "raid1", 00:08:17.083 "superblock": true, 00:08:17.083 "num_base_bdevs": 2, 00:08:17.083 "num_base_bdevs_discovered": 1, 00:08:17.083 "num_base_bdevs_operational": 1, 00:08:17.084 "base_bdevs_list": [ 00:08:17.084 { 00:08:17.084 "name": null, 00:08:17.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.084 "is_configured": false, 00:08:17.084 "data_offset": 0, 00:08:17.084 "data_size": 63488 00:08:17.084 }, 00:08:17.084 { 00:08:17.084 "name": "BaseBdev2", 00:08:17.084 "uuid": "fc27b4ed-fd4f-4677-ae93-d5bb09542e35", 00:08:17.084 "is_configured": true, 00:08:17.084 "data_offset": 2048, 00:08:17.084 "data_size": 63488 00:08:17.084 } 00:08:17.084 ] 00:08:17.084 }' 00:08:17.084 
10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.084 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.652 [2024-11-19 10:02:31.690535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.652 [2024-11-19 10:02:31.690717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.652 [2024-11-19 10:02:31.779778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.652 [2024-11-19 10:02:31.779887] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.652 [2024-11-19 10:02:31.779929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62771 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62771 ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62771 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62771 00:08:17.652 killing process with pid 62771 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62771' 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62771 00:08:17.652 [2024-11-19 10:02:31.874282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.652 10:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62771 00:08:17.911 [2024-11-19 10:02:31.890137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.847 ************************************ 00:08:18.847 END TEST raid_state_function_test_sb 00:08:18.847 ************************************ 00:08:18.847 10:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.847 00:08:18.847 real 0m5.533s 00:08:18.847 user 0m8.315s 00:08:18.847 sys 0m0.839s 00:08:18.847 10:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.847 10:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.847 10:02:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:18.847 10:02:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:18.847 10:02:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.847 10:02:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.847 
************************************ 00:08:18.847 START TEST raid_superblock_test 00:08:18.847 ************************************ 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63023 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63023 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63023 ']' 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.847 10:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.105 [2024-11-19 10:02:33.098859] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:19.105 [2024-11-19 10:02:33.099058] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63023 ] 00:08:19.105 [2024-11-19 10:02:33.279315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.364 [2024-11-19 10:02:33.465317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.622 [2024-11-19 10:02:33.697348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.622 [2024-11-19 10:02:33.697422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:20.225 
10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 malloc1 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 [2024-11-19 10:02:34.222900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.225 [2024-11-19 10:02:34.223001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.225 [2024-11-19 10:02:34.223039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.225 [2024-11-19 10:02:34.223057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.225 [2024-11-19 10:02:34.226099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.225 [2024-11-19 10:02:34.226146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.225 pt1 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 malloc2 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 [2024-11-19 10:02:34.282945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.225 [2024-11-19 10:02:34.283193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.225 [2024-11-19 10:02:34.283240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:20.225 [2024-11-19 10:02:34.283257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.225 [2024-11-19 10:02:34.286338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.225 [2024-11-19 10:02:34.286524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.225 
pt2 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 [2024-11-19 10:02:34.291255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:20.225 [2024-11-19 10:02:34.293915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.225 [2024-11-19 10:02:34.294151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:20.225 [2024-11-19 10:02:34.294178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.225 [2024-11-19 10:02:34.294536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.225 [2024-11-19 10:02:34.294751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:20.225 [2024-11-19 10:02:34.294777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:20.225 [2024-11-19 10:02:34.295009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.225 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.226 "name": "raid_bdev1", 00:08:20.226 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:20.226 "strip_size_kb": 0, 00:08:20.226 "state": "online", 00:08:20.226 "raid_level": "raid1", 00:08:20.226 "superblock": true, 00:08:20.226 "num_base_bdevs": 2, 00:08:20.226 "num_base_bdevs_discovered": 2, 00:08:20.226 "num_base_bdevs_operational": 2, 00:08:20.226 "base_bdevs_list": [ 00:08:20.226 { 00:08:20.226 "name": "pt1", 00:08:20.226 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:20.226 "is_configured": true, 00:08:20.226 "data_offset": 2048, 00:08:20.226 "data_size": 63488 00:08:20.226 }, 00:08:20.226 { 00:08:20.226 "name": "pt2", 00:08:20.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.226 "is_configured": true, 00:08:20.226 "data_offset": 2048, 00:08:20.226 "data_size": 63488 00:08:20.226 } 00:08:20.226 ] 00:08:20.226 }' 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.226 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.792 [2024-11-19 10:02:34.855751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.792 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:20.792 "name": "raid_bdev1", 00:08:20.792 "aliases": [ 00:08:20.792 "847e88a2-51c7-4fd1-b83a-76d93e4d54ab" 00:08:20.792 ], 00:08:20.792 "product_name": "Raid Volume", 00:08:20.793 "block_size": 512, 00:08:20.793 "num_blocks": 63488, 00:08:20.793 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:20.793 "assigned_rate_limits": { 00:08:20.793 "rw_ios_per_sec": 0, 00:08:20.793 "rw_mbytes_per_sec": 0, 00:08:20.793 "r_mbytes_per_sec": 0, 00:08:20.793 "w_mbytes_per_sec": 0 00:08:20.793 }, 00:08:20.793 "claimed": false, 00:08:20.793 "zoned": false, 00:08:20.793 "supported_io_types": { 00:08:20.793 "read": true, 00:08:20.793 "write": true, 00:08:20.793 "unmap": false, 00:08:20.793 "flush": false, 00:08:20.793 "reset": true, 00:08:20.793 "nvme_admin": false, 00:08:20.793 "nvme_io": false, 00:08:20.793 "nvme_io_md": false, 00:08:20.793 "write_zeroes": true, 00:08:20.793 "zcopy": false, 00:08:20.793 "get_zone_info": false, 00:08:20.793 "zone_management": false, 00:08:20.793 "zone_append": false, 00:08:20.793 "compare": false, 00:08:20.793 "compare_and_write": false, 00:08:20.793 "abort": false, 00:08:20.793 "seek_hole": false, 00:08:20.793 "seek_data": false, 00:08:20.793 "copy": false, 00:08:20.793 "nvme_iov_md": false 00:08:20.793 }, 00:08:20.793 "memory_domains": [ 00:08:20.793 { 00:08:20.793 "dma_device_id": "system", 00:08:20.793 "dma_device_type": 1 00:08:20.793 }, 00:08:20.793 { 00:08:20.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.793 "dma_device_type": 2 00:08:20.793 }, 00:08:20.793 { 00:08:20.793 "dma_device_id": "system", 00:08:20.793 "dma_device_type": 1 00:08:20.793 }, 00:08:20.793 { 00:08:20.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.793 "dma_device_type": 2 00:08:20.793 } 00:08:20.793 ], 00:08:20.793 "driver_specific": { 00:08:20.793 "raid": { 00:08:20.793 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:20.793 "strip_size_kb": 0, 00:08:20.793 "state": "online", 00:08:20.793 "raid_level": "raid1", 
00:08:20.793 "superblock": true, 00:08:20.793 "num_base_bdevs": 2, 00:08:20.793 "num_base_bdevs_discovered": 2, 00:08:20.793 "num_base_bdevs_operational": 2, 00:08:20.793 "base_bdevs_list": [ 00:08:20.793 { 00:08:20.793 "name": "pt1", 00:08:20.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.793 "is_configured": true, 00:08:20.793 "data_offset": 2048, 00:08:20.793 "data_size": 63488 00:08:20.793 }, 00:08:20.793 { 00:08:20.793 "name": "pt2", 00:08:20.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.793 "is_configured": true, 00:08:20.793 "data_offset": 2048, 00:08:20.793 "data_size": 63488 00:08:20.793 } 00:08:20.793 ] 00:08:20.793 } 00:08:20.793 } 00:08:20.793 }' 00:08:20.793 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.793 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.793 pt2' 00:08:20.793 10:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.052 [2024-11-19 10:02:35.135837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.052 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=847e88a2-51c7-4fd1-b83a-76d93e4d54ab 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 847e88a2-51c7-4fd1-b83a-76d93e4d54ab ']' 00:08:21.053 10:02:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.053 [2024-11-19 10:02:35.187418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.053 [2024-11-19 10:02:35.187588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.053 [2024-11-19 10:02:35.187748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.053 [2024-11-19 10:02:35.187863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.053 [2024-11-19 10:02:35.187891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:21.053 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.312 10:02:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.312 [2024-11-19 10:02:35.331532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:21.312 [2024-11-19 10:02:35.334459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:21.312 [2024-11-19 10:02:35.334689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:21.312 [2024-11-19 10:02:35.334951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:21.312 [2024-11-19 10:02:35.335188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.312 [2024-11-19 10:02:35.335241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:21.312 request: 00:08:21.312 { 00:08:21.312 "name": "raid_bdev1", 00:08:21.312 "raid_level": "raid1", 00:08:21.312 "base_bdevs": [ 00:08:21.312 "malloc1", 00:08:21.312 "malloc2" 00:08:21.312 ], 00:08:21.312 "superblock": false, 00:08:21.312 "method": "bdev_raid_create", 00:08:21.312 "req_id": 1 00:08:21.312 } 00:08:21.312 Got 
JSON-RPC error response 00:08:21.312 response: 00:08:21.312 { 00:08:21.312 "code": -17, 00:08:21.312 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:21.312 } 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.312 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.312 [2024-11-19 10:02:35.399697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.312 [2024-11-19 10:02:35.399947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:21.312 [2024-11-19 10:02:35.399990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:21.313 [2024-11-19 10:02:35.400011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.313 [2024-11-19 10:02:35.403225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.313 [2024-11-19 10:02:35.403280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.313 [2024-11-19 10:02:35.403415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:21.313 [2024-11-19 10:02:35.403503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.313 pt1 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.313 
10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.313 "name": "raid_bdev1", 00:08:21.313 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:21.313 "strip_size_kb": 0, 00:08:21.313 "state": "configuring", 00:08:21.313 "raid_level": "raid1", 00:08:21.313 "superblock": true, 00:08:21.313 "num_base_bdevs": 2, 00:08:21.313 "num_base_bdevs_discovered": 1, 00:08:21.313 "num_base_bdevs_operational": 2, 00:08:21.313 "base_bdevs_list": [ 00:08:21.313 { 00:08:21.313 "name": "pt1", 00:08:21.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.313 "is_configured": true, 00:08:21.313 "data_offset": 2048, 00:08:21.313 "data_size": 63488 00:08:21.313 }, 00:08:21.313 { 00:08:21.313 "name": null, 00:08:21.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.313 "is_configured": false, 00:08:21.313 "data_offset": 2048, 00:08:21.313 "data_size": 63488 00:08:21.313 } 00:08:21.313 ] 00:08:21.313 }' 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.313 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.880 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.881 [2024-11-19 10:02:35.931946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.881 [2024-11-19 10:02:35.932060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.881 [2024-11-19 10:02:35.932096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:21.881 [2024-11-19 10:02:35.932118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.881 [2024-11-19 10:02:35.932829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.881 [2024-11-19 10:02:35.932882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.881 [2024-11-19 10:02:35.933005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.881 [2024-11-19 10:02:35.933057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.881 [2024-11-19 10:02:35.933243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.881 [2024-11-19 10:02:35.933271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.881 [2024-11-19 10:02:35.933596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:21.881 [2024-11-19 10:02:35.933820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.881 [2024-11-19 10:02:35.933837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:21.881 [2024-11-19 10:02:35.934027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.881 pt2 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.881 "name": "raid_bdev1", 00:08:21.881 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:21.881 "strip_size_kb": 0, 00:08:21.881 "state": "online", 00:08:21.881 "raid_level": "raid1", 00:08:21.881 "superblock": true, 00:08:21.881 "num_base_bdevs": 2, 00:08:21.881 "num_base_bdevs_discovered": 2, 00:08:21.881 "num_base_bdevs_operational": 2, 00:08:21.881 "base_bdevs_list": [ 00:08:21.881 { 00:08:21.881 "name": "pt1", 00:08:21.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.881 "is_configured": true, 00:08:21.881 "data_offset": 2048, 00:08:21.881 "data_size": 63488 00:08:21.881 }, 00:08:21.881 { 00:08:21.881 "name": "pt2", 00:08:21.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.881 "is_configured": true, 00:08:21.881 "data_offset": 2048, 00:08:21.881 "data_size": 63488 00:08:21.881 } 00:08:21.881 ] 00:08:21.881 }' 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.881 10:02:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 [2024-11-19 10:02:36.488954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.449 "name": "raid_bdev1", 00:08:22.449 "aliases": [ 00:08:22.449 "847e88a2-51c7-4fd1-b83a-76d93e4d54ab" 00:08:22.449 ], 00:08:22.449 "product_name": "Raid Volume", 00:08:22.449 "block_size": 512, 00:08:22.449 "num_blocks": 63488, 00:08:22.449 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:22.449 "assigned_rate_limits": { 00:08:22.449 "rw_ios_per_sec": 0, 00:08:22.449 "rw_mbytes_per_sec": 0, 00:08:22.449 "r_mbytes_per_sec": 0, 00:08:22.449 "w_mbytes_per_sec": 0 00:08:22.449 }, 00:08:22.449 "claimed": false, 00:08:22.449 "zoned": false, 00:08:22.449 "supported_io_types": { 00:08:22.449 "read": true, 00:08:22.449 "write": true, 00:08:22.449 "unmap": false, 00:08:22.449 "flush": false, 00:08:22.449 "reset": true, 00:08:22.449 "nvme_admin": false, 00:08:22.449 "nvme_io": false, 00:08:22.449 "nvme_io_md": false, 00:08:22.449 "write_zeroes": true, 00:08:22.449 "zcopy": false, 00:08:22.449 "get_zone_info": false, 00:08:22.449 "zone_management": false, 00:08:22.449 "zone_append": false, 00:08:22.449 "compare": false, 00:08:22.449 "compare_and_write": false, 00:08:22.449 "abort": false, 00:08:22.449 "seek_hole": false, 00:08:22.449 "seek_data": false, 00:08:22.449 "copy": false, 00:08:22.449 "nvme_iov_md": false 00:08:22.449 }, 00:08:22.449 "memory_domains": [ 00:08:22.449 { 00:08:22.449 "dma_device_id": 
"system", 00:08:22.449 "dma_device_type": 1 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.449 "dma_device_type": 2 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "system", 00:08:22.449 "dma_device_type": 1 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.449 "dma_device_type": 2 00:08:22.449 } 00:08:22.449 ], 00:08:22.449 "driver_specific": { 00:08:22.449 "raid": { 00:08:22.449 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:22.449 "strip_size_kb": 0, 00:08:22.449 "state": "online", 00:08:22.449 "raid_level": "raid1", 00:08:22.449 "superblock": true, 00:08:22.449 "num_base_bdevs": 2, 00:08:22.449 "num_base_bdevs_discovered": 2, 00:08:22.449 "num_base_bdevs_operational": 2, 00:08:22.449 "base_bdevs_list": [ 00:08:22.449 { 00:08:22.449 "name": "pt1", 00:08:22.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.449 "is_configured": true, 00:08:22.449 "data_offset": 2048, 00:08:22.449 "data_size": 63488 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "name": "pt2", 00:08:22.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.449 "is_configured": true, 00:08:22.449 "data_offset": 2048, 00:08:22.449 "data_size": 63488 00:08:22.449 } 00:08:22.449 ] 00:08:22.449 } 00:08:22.449 } 00:08:22.449 }' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:22.449 pt2' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.449 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.708 [2024-11-19 10:02:36.752957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 847e88a2-51c7-4fd1-b83a-76d93e4d54ab '!=' 847e88a2-51c7-4fd1-b83a-76d93e4d54ab ']' 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.708 [2024-11-19 10:02:36.804741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.708 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.709 "name": "raid_bdev1", 00:08:22.709 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:22.709 "strip_size_kb": 0, 00:08:22.709 "state": "online", 00:08:22.709 "raid_level": "raid1", 00:08:22.709 "superblock": true, 00:08:22.709 "num_base_bdevs": 2, 00:08:22.709 "num_base_bdevs_discovered": 1, 00:08:22.709 "num_base_bdevs_operational": 1, 00:08:22.709 "base_bdevs_list": [ 00:08:22.709 { 00:08:22.709 "name": null, 00:08:22.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.709 "is_configured": false, 00:08:22.709 "data_offset": 0, 00:08:22.709 "data_size": 63488 00:08:22.709 }, 00:08:22.709 { 00:08:22.709 "name": "pt2", 00:08:22.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.709 "is_configured": true, 00:08:22.709 "data_offset": 2048, 00:08:22.709 "data_size": 63488 00:08:22.709 } 00:08:22.709 ] 00:08:22.709 }' 
00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.709 10:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.276 [2024-11-19 10:02:37.320803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.276 [2024-11-19 10:02:37.320852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.276 [2024-11-19 10:02:37.320994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.276 [2024-11-19 10:02:37.321072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.276 [2024-11-19 10:02:37.321094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.276 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.276 [2024-11-19 10:02:37.400798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.276 [2024-11-19 10:02:37.400898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.276 [2024-11-19 10:02:37.400932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:23.276 [2024-11-19 10:02:37.400952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.276 
[2024-11-19 10:02:37.404157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.276 [2024-11-19 10:02:37.404348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.276 [2024-11-19 10:02:37.404496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:23.276 [2024-11-19 10:02:37.404567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.276 [2024-11-19 10:02:37.404707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:23.276 [2024-11-19 10:02:37.404731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.276 [2024-11-19 10:02:37.405050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:23.276 [2024-11-19 10:02:37.405262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:23.277 [2024-11-19 10:02:37.405278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:23.277 [2024-11-19 10:02:37.405516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.277 pt2 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.277 "name": "raid_bdev1", 00:08:23.277 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:23.277 "strip_size_kb": 0, 00:08:23.277 "state": "online", 00:08:23.277 "raid_level": "raid1", 00:08:23.277 "superblock": true, 00:08:23.277 "num_base_bdevs": 2, 00:08:23.277 "num_base_bdevs_discovered": 1, 00:08:23.277 "num_base_bdevs_operational": 1, 00:08:23.277 "base_bdevs_list": [ 00:08:23.277 { 00:08:23.277 "name": null, 00:08:23.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.277 "is_configured": false, 00:08:23.277 "data_offset": 2048, 00:08:23.277 "data_size": 63488 00:08:23.277 }, 00:08:23.277 { 00:08:23.277 "name": "pt2", 00:08:23.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.277 "is_configured": true, 00:08:23.277 "data_offset": 2048, 00:08:23.277 "data_size": 63488 00:08:23.277 } 00:08:23.277 ] 00:08:23.277 }' 
00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.277 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 [2024-11-19 10:02:37.948979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.845 [2024-11-19 10:02:37.949166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.845 [2024-11-19 10:02:37.949301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.845 [2024-11-19 10:02:37.949386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.845 [2024-11-19 10:02:37.949403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 10:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 [2024-11-19 10:02:38.017042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.845 [2024-11-19 10:02:38.017151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.845 [2024-11-19 10:02:38.017187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:23.845 [2024-11-19 10:02:38.017203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.845 [2024-11-19 10:02:38.020565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.845 [2024-11-19 10:02:38.020610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.845 [2024-11-19 10:02:38.020760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:23.845 [2024-11-19 10:02:38.020839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.845 [2024-11-19 10:02:38.021025] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:23.845 [2024-11-19 10:02:38.021054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.845 [2024-11-19 10:02:38.021082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:23.845 [2024-11-19 10:02:38.021156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:23.845 [2024-11-19 10:02:38.021354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:23.845 [2024-11-19 10:02:38.021372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.845 pt1 00:08:23.845 [2024-11-19 10:02:38.021730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:23.845 [2024-11-19 10:02:38.021963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:23.845 [2024-11-19 10:02:38.021993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 [2024-11-19 10:02:38.022194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.104 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.104 "name": "raid_bdev1", 00:08:24.104 "uuid": "847e88a2-51c7-4fd1-b83a-76d93e4d54ab", 00:08:24.104 "strip_size_kb": 0, 00:08:24.104 "state": "online", 00:08:24.104 "raid_level": "raid1", 00:08:24.104 "superblock": true, 00:08:24.104 "num_base_bdevs": 2, 00:08:24.104 "num_base_bdevs_discovered": 1, 00:08:24.104 "num_base_bdevs_operational": 1, 00:08:24.104 "base_bdevs_list": [ 00:08:24.104 { 00:08:24.104 "name": null, 00:08:24.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.104 "is_configured": false, 00:08:24.104 "data_offset": 2048, 00:08:24.104 "data_size": 63488 00:08:24.104 }, 00:08:24.104 { 00:08:24.104 "name": "pt2", 00:08:24.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.104 "is_configured": true, 00:08:24.104 "data_offset": 2048, 00:08:24.104 "data_size": 63488 00:08:24.104 } 00:08:24.104 ] 00:08:24.104 }' 00:08:24.104 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.104 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:24.622 [2024-11-19 10:02:38.597591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 847e88a2-51c7-4fd1-b83a-76d93e4d54ab '!=' 847e88a2-51c7-4fd1-b83a-76d93e4d54ab ']' 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63023 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63023 ']' 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63023 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63023 00:08:24.622 10:02:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.622 killing process with pid 63023 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63023' 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63023 00:08:24.622 10:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63023 00:08:24.622 [2024-11-19 10:02:38.679425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.622 [2024-11-19 10:02:38.679579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.622 [2024-11-19 10:02:38.679661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.622 [2024-11-19 10:02:38.679686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:24.880 [2024-11-19 10:02:38.881946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.817 10:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:25.817 00:08:25.817 real 0m7.049s 00:08:25.817 user 0m11.029s 00:08:25.817 sys 0m1.119s 00:08:25.817 ************************************ 00:08:25.817 END TEST raid_superblock_test 00:08:25.817 ************************************ 00:08:25.817 10:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.817 10:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.075 10:02:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:26.075 10:02:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.075 10:02:40 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.075 10:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.075 ************************************ 00:08:26.075 START TEST raid_read_error_test 00:08:26.075 ************************************ 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.076 10:02:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.24ZrQKxaSW 00:08:26.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63364 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63364 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63364 ']' 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.076 10:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.076 [2024-11-19 10:02:40.187436] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:26.076 [2024-11-19 10:02:40.187637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63364 ] 00:08:26.335 [2024-11-19 10:02:40.375417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.335 [2024-11-19 10:02:40.522914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.593 [2024-11-19 10:02:40.750520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.593 [2024-11-19 10:02:40.750609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 BaseBdev1_malloc 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 true 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.160 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 [2024-11-19 10:02:41.253964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.161 [2024-11-19 10:02:41.254062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.161 [2024-11-19 10:02:41.254100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.161 [2024-11-19 10:02:41.254121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.161 [2024-11-19 10:02:41.257563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.161 [2024-11-19 10:02:41.257618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:27.161 BaseBdev1 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:27.161 BaseBdev2_malloc 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 true 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 [2024-11-19 10:02:41.323255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:27.161 [2024-11-19 10:02:41.323344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.161 [2024-11-19 10:02:41.323373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:27.161 [2024-11-19 10:02:41.323397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.161 [2024-11-19 10:02:41.326751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.161 [2024-11-19 10:02:41.327015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:27.161 BaseBdev2 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:27.161 10:02:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 [2024-11-19 10:02:41.335487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.161 [2024-11-19 10:02:41.338289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.161 [2024-11-19 10:02:41.338768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.161 [2024-11-19 10:02:41.338826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.161 [2024-11-19 10:02:41.339212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.161 [2024-11-19 10:02:41.339479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.161 [2024-11-19 10:02:41.339498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:27.161 [2024-11-19 10:02:41.339802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.419 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.419 "name": "raid_bdev1", 00:08:27.419 "uuid": "8b2f8944-ceda-467a-a848-33a0c7292955", 00:08:27.419 "strip_size_kb": 0, 00:08:27.419 "state": "online", 00:08:27.419 "raid_level": "raid1", 00:08:27.419 "superblock": true, 00:08:27.419 "num_base_bdevs": 2, 00:08:27.419 "num_base_bdevs_discovered": 2, 00:08:27.419 "num_base_bdevs_operational": 2, 00:08:27.419 "base_bdevs_list": [ 00:08:27.419 { 00:08:27.419 "name": "BaseBdev1", 00:08:27.419 "uuid": "2365a69d-bd0f-5907-8afe-f6da1196201d", 00:08:27.419 "is_configured": true, 00:08:27.419 "data_offset": 2048, 00:08:27.419 "data_size": 63488 00:08:27.419 }, 00:08:27.419 { 00:08:27.419 "name": "BaseBdev2", 00:08:27.419 "uuid": "8c70f891-dd8c-502d-9935-33aea3b4ae17", 00:08:27.419 "is_configured": true, 00:08:27.419 "data_offset": 2048, 00:08:27.419 "data_size": 63488 00:08:27.419 } 00:08:27.419 ] 00:08:27.419 }' 00:08:27.419 10:02:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.419 10:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.678 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:27.678 10:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.937 [2024-11-19 10:02:41.969416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.874 10:02:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.874 "name": "raid_bdev1", 00:08:28.874 "uuid": "8b2f8944-ceda-467a-a848-33a0c7292955", 00:08:28.874 "strip_size_kb": 0, 00:08:28.874 "state": "online", 00:08:28.874 "raid_level": "raid1", 00:08:28.874 "superblock": true, 00:08:28.874 "num_base_bdevs": 2, 00:08:28.874 "num_base_bdevs_discovered": 2, 00:08:28.874 "num_base_bdevs_operational": 2, 00:08:28.874 "base_bdevs_list": [ 00:08:28.874 { 00:08:28.874 "name": "BaseBdev1", 00:08:28.874 "uuid": "2365a69d-bd0f-5907-8afe-f6da1196201d", 00:08:28.874 "is_configured": true, 00:08:28.874 "data_offset": 2048, 00:08:28.874 "data_size": 63488 00:08:28.874 }, 00:08:28.874 { 00:08:28.874 "name": "BaseBdev2", 00:08:28.874 "uuid": "8c70f891-dd8c-502d-9935-33aea3b4ae17", 00:08:28.874 "is_configured": true, 00:08:28.874 "data_offset": 2048, 00:08:28.874 "data_size": 63488 
00:08:28.874 } 00:08:28.874 ] 00:08:28.874 }' 00:08:28.874 10:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.875 10:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 [2024-11-19 10:02:43.376579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.442 [2024-11-19 10:02:43.376631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.442 [2024-11-19 10:02:43.380023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.442 [2024-11-19 10:02:43.380102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.442 [2024-11-19 10:02:43.380236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.442 [2024-11-19 10:02:43.380260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.442 { 00:08:29.442 "results": [ 00:08:29.442 { 00:08:29.442 "job": "raid_bdev1", 00:08:29.442 "core_mask": "0x1", 00:08:29.442 "workload": "randrw", 00:08:29.442 "percentage": 50, 00:08:29.442 "status": "finished", 00:08:29.442 "queue_depth": 1, 00:08:29.442 "io_size": 131072, 00:08:29.442 "runtime": 1.404289, 00:08:29.442 "iops": 10267.829485241286, 00:08:29.442 "mibps": 1283.4786856551607, 00:08:29.442 "io_failed": 0, 00:08:29.442 "io_timeout": 0, 00:08:29.442 "avg_latency_us": 93.19143907344476, 00:08:29.442 "min_latency_us": 45.38181818181818, 00:08:29.442 "max_latency_us": 2144.8145454545456 00:08:29.442 } 00:08:29.442 ], 
00:08:29.442 "core_count": 1 00:08:29.442 } 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63364 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63364 ']' 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63364 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63364 00:08:29.442 killing process with pid 63364 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63364' 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63364 00:08:29.442 10:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63364 00:08:29.442 [2024-11-19 10:02:43.420537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.442 [2024-11-19 10:02:43.555276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.820 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.24ZrQKxaSW 00:08:30.820 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.820 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:30.821 ************************************ 00:08:30.821 END TEST raid_read_error_test 00:08:30.821 ************************************ 00:08:30.821 00:08:30.821 real 0m4.672s 00:08:30.821 user 0m5.752s 00:08:30.821 sys 0m0.636s 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.821 10:02:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.821 10:02:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:30.821 10:02:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.821 10:02:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.821 10:02:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.821 ************************************ 00:08:30.821 START TEST raid_write_error_test 00:08:30.821 ************************************ 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wrQ5g76rTX 00:08:30.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63510 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63510 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63510 ']' 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.821 10:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.821 [2024-11-19 10:02:44.916271] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:30.821 [2024-11-19 10:02:44.916453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:08:31.080 [2024-11-19 10:02:45.115578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.080 [2024-11-19 10:02:45.263320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.339 [2024-11-19 10:02:45.489496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.339 [2024-11-19 10:02:45.489570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 BaseBdev1_malloc 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 true 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 [2024-11-19 10:02:46.035563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.908 [2024-11-19 10:02:46.035648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.908 [2024-11-19 10:02:46.035683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.908 [2024-11-19 10:02:46.035703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.908 [2024-11-19 10:02:46.038887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.908 [2024-11-19 10:02:46.038944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.908 BaseBdev1 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 BaseBdev2_malloc 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.908 10:02:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 true 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 [2024-11-19 10:02:46.095799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.908 [2024-11-19 10:02:46.095879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.908 [2024-11-19 10:02:46.095910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.908 [2024-11-19 10:02:46.095928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.908 [2024-11-19 10:02:46.098981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.908 [2024-11-19 10:02:46.099036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.908 BaseBdev2 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 [2024-11-19 10:02:46.103935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:31.908 [2024-11-19 10:02:46.106547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.908 [2024-11-19 10:02:46.106858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.908 [2024-11-19 10:02:46.106883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:31.908 [2024-11-19 10:02:46.107223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.908 [2024-11-19 10:02:46.107485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.908 [2024-11-19 10:02:46.107503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:31.908 [2024-11-19 10:02:46.107715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.908 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.167 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.167 "name": "raid_bdev1", 00:08:32.167 "uuid": "dcc10d87-74ce-4cd5-a16e-77a021f05f4a", 00:08:32.167 "strip_size_kb": 0, 00:08:32.167 "state": "online", 00:08:32.167 "raid_level": "raid1", 00:08:32.167 "superblock": true, 00:08:32.167 "num_base_bdevs": 2, 00:08:32.167 "num_base_bdevs_discovered": 2, 00:08:32.167 "num_base_bdevs_operational": 2, 00:08:32.167 "base_bdevs_list": [ 00:08:32.167 { 00:08:32.167 "name": "BaseBdev1", 00:08:32.167 "uuid": "5c52ca85-bcbd-5ba3-8285-4c28c6f19824", 00:08:32.167 "is_configured": true, 00:08:32.167 "data_offset": 2048, 00:08:32.167 "data_size": 63488 00:08:32.167 }, 00:08:32.167 { 00:08:32.167 "name": "BaseBdev2", 00:08:32.167 "uuid": "a2c77378-312b-5f17-b995-179ae652c55b", 00:08:32.167 "is_configured": true, 00:08:32.167 "data_offset": 2048, 00:08:32.167 "data_size": 63488 00:08:32.167 } 00:08:32.167 ] 00:08:32.167 }' 00:08:32.167 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.167 10:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.427 10:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.427 10:02:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.686 [2024-11-19 10:02:46.758127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.672 [2024-11-19 10:02:47.637574] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:33.672 [2024-11-19 10:02:47.637662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.672 [2024-11-19 10:02:47.637919] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.672 "name": "raid_bdev1", 00:08:33.672 "uuid": "dcc10d87-74ce-4cd5-a16e-77a021f05f4a", 00:08:33.672 "strip_size_kb": 0, 00:08:33.672 "state": "online", 00:08:33.672 "raid_level": "raid1", 00:08:33.672 "superblock": true, 00:08:33.672 "num_base_bdevs": 2, 00:08:33.672 "num_base_bdevs_discovered": 1, 00:08:33.672 "num_base_bdevs_operational": 1, 00:08:33.672 "base_bdevs_list": [ 00:08:33.672 { 00:08:33.672 "name": null, 00:08:33.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.672 "is_configured": false, 00:08:33.672 "data_offset": 0, 00:08:33.672 "data_size": 63488 00:08:33.672 }, 00:08:33.672 { 00:08:33.672 "name": 
"BaseBdev2", 00:08:33.672 "uuid": "a2c77378-312b-5f17-b995-179ae652c55b", 00:08:33.672 "is_configured": true, 00:08:33.672 "data_offset": 2048, 00:08:33.672 "data_size": 63488 00:08:33.672 } 00:08:33.672 ] 00:08:33.672 }' 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.672 10:02:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.241 [2024-11-19 10:02:48.170870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.241 [2024-11-19 10:02:48.171050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.241 [2024-11-19 10:02:48.174536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.241 [2024-11-19 10:02:48.174591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.241 [2024-11-19 10:02:48.174681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.241 [2024-11-19 10:02:48.174701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.241 { 00:08:34.241 "results": [ 00:08:34.241 { 00:08:34.241 "job": "raid_bdev1", 00:08:34.241 "core_mask": "0x1", 00:08:34.241 "workload": "randrw", 00:08:34.241 "percentage": 50, 00:08:34.241 "status": "finished", 00:08:34.241 "queue_depth": 1, 00:08:34.241 "io_size": 131072, 00:08:34.241 "runtime": 1.410054, 00:08:34.241 "iops": 12108.756118559999, 00:08:34.241 "mibps": 1513.5945148199999, 00:08:34.241 "io_failed": 0, 00:08:34.241 "io_timeout": 0, 
00:08:34.241 "avg_latency_us": 78.39773392824817, 00:08:34.241 "min_latency_us": 43.52, 00:08:34.241 "max_latency_us": 1832.0290909090909 00:08:34.241 } 00:08:34.241 ], 00:08:34.241 "core_count": 1 00:08:34.241 } 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63510 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63510 ']' 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63510 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63510 00:08:34.241 killing process with pid 63510 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63510' 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63510 00:08:34.241 10:02:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63510 00:08:34.241 [2024-11-19 10:02:48.211940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.241 [2024-11-19 10:02:48.345742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wrQ5g76rTX 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
grep raid_bdev1 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:35.625 00:08:35.625 real 0m4.752s 00:08:35.625 user 0m5.882s 00:08:35.625 sys 0m0.682s 00:08:35.625 ************************************ 00:08:35.625 END TEST raid_write_error_test 00:08:35.625 ************************************ 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.625 10:02:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.625 10:02:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:35.625 10:02:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:35.625 10:02:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:35.625 10:02:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.625 10:02:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.625 10:02:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.625 ************************************ 00:08:35.625 START TEST raid_state_function_test 00:08:35.625 ************************************ 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.625 10:02:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:35.625 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63659 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.626 Process raid pid: 63659 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63659' 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63659 00:08:35.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63659 ']' 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.626 10:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.626 [2024-11-19 10:02:49.719799] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:08:35.626 [2024-11-19 10:02:49.720312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.884 [2024-11-19 10:02:49.903723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.884 [2024-11-19 10:02:50.053307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.143 [2024-11-19 10:02:50.285328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.143 [2024-11-19 10:02:50.285416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.711 [2024-11-19 10:02:50.716524] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.711 [2024-11-19 10:02:50.716600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.711 [2024-11-19 10:02:50.716620] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.711 [2024-11-19 10:02:50.716638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.711 [2024-11-19 10:02:50.716649] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.711 [2024-11-19 10:02:50.716664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.711 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.712 "name": "Existed_Raid", 00:08:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.712 "strip_size_kb": 64, 00:08:36.712 "state": "configuring", 00:08:36.712 "raid_level": "raid0", 00:08:36.712 "superblock": false, 00:08:36.712 "num_base_bdevs": 3, 00:08:36.712 "num_base_bdevs_discovered": 0, 00:08:36.712 "num_base_bdevs_operational": 3, 00:08:36.712 "base_bdevs_list": [ 00:08:36.712 { 00:08:36.712 "name": "BaseBdev1", 00:08:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.712 "is_configured": false, 00:08:36.712 "data_offset": 0, 00:08:36.712 "data_size": 0 00:08:36.712 }, 00:08:36.712 { 00:08:36.712 "name": "BaseBdev2", 00:08:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.712 "is_configured": false, 00:08:36.712 "data_offset": 0, 00:08:36.712 "data_size": 0 00:08:36.712 }, 00:08:36.712 { 00:08:36.712 "name": "BaseBdev3", 00:08:36.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.712 "is_configured": false, 00:08:36.712 "data_offset": 0, 00:08:36.712 "data_size": 0 00:08:36.712 } 00:08:36.712 ] 00:08:36.712 }' 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.712 10:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.284 10:02:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 [2024-11-19 10:02:51.228641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.284 [2024-11-19 10:02:51.228705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 [2024-11-19 10:02:51.236593] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.284 [2024-11-19 10:02:51.236681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.284 [2024-11-19 10:02:51.236696] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.284 [2024-11-19 10:02:51.236713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.284 [2024-11-19 10:02:51.236722] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.284 [2024-11-19 10:02:51.236737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 [2024-11-19 10:02:51.287002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.284 BaseBdev1 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.284 [ 00:08:37.284 { 00:08:37.284 "name": "BaseBdev1", 00:08:37.284 "aliases": [ 00:08:37.284 "7e9dfb2f-4404-4b25-a2dd-60138f15385b" 00:08:37.284 ], 00:08:37.284 
"product_name": "Malloc disk", 00:08:37.284 "block_size": 512, 00:08:37.284 "num_blocks": 65536, 00:08:37.284 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:37.284 "assigned_rate_limits": { 00:08:37.284 "rw_ios_per_sec": 0, 00:08:37.284 "rw_mbytes_per_sec": 0, 00:08:37.284 "r_mbytes_per_sec": 0, 00:08:37.284 "w_mbytes_per_sec": 0 00:08:37.284 }, 00:08:37.284 "claimed": true, 00:08:37.284 "claim_type": "exclusive_write", 00:08:37.284 "zoned": false, 00:08:37.284 "supported_io_types": { 00:08:37.284 "read": true, 00:08:37.284 "write": true, 00:08:37.284 "unmap": true, 00:08:37.284 "flush": true, 00:08:37.284 "reset": true, 00:08:37.284 "nvme_admin": false, 00:08:37.284 "nvme_io": false, 00:08:37.284 "nvme_io_md": false, 00:08:37.284 "write_zeroes": true, 00:08:37.284 "zcopy": true, 00:08:37.284 "get_zone_info": false, 00:08:37.284 "zone_management": false, 00:08:37.284 "zone_append": false, 00:08:37.284 "compare": false, 00:08:37.284 "compare_and_write": false, 00:08:37.284 "abort": true, 00:08:37.284 "seek_hole": false, 00:08:37.284 "seek_data": false, 00:08:37.284 "copy": true, 00:08:37.284 "nvme_iov_md": false 00:08:37.284 }, 00:08:37.284 "memory_domains": [ 00:08:37.284 { 00:08:37.284 "dma_device_id": "system", 00:08:37.284 "dma_device_type": 1 00:08:37.284 }, 00:08:37.284 { 00:08:37.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.284 "dma_device_type": 2 00:08:37.284 } 00:08:37.284 ], 00:08:37.284 "driver_specific": {} 00:08:37.284 } 00:08:37.284 ] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.284 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.285 10:02:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.285 "name": "Existed_Raid", 00:08:37.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.285 "strip_size_kb": 64, 00:08:37.285 "state": "configuring", 00:08:37.285 "raid_level": "raid0", 00:08:37.285 "superblock": false, 00:08:37.285 "num_base_bdevs": 3, 00:08:37.285 "num_base_bdevs_discovered": 1, 00:08:37.285 "num_base_bdevs_operational": 3, 00:08:37.285 "base_bdevs_list": [ 00:08:37.285 { 00:08:37.285 "name": "BaseBdev1", 
00:08:37.285 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:37.285 "is_configured": true, 00:08:37.285 "data_offset": 0, 00:08:37.285 "data_size": 65536 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "name": "BaseBdev2", 00:08:37.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.285 "is_configured": false, 00:08:37.285 "data_offset": 0, 00:08:37.285 "data_size": 0 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "name": "BaseBdev3", 00:08:37.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.285 "is_configured": false, 00:08:37.285 "data_offset": 0, 00:08:37.285 "data_size": 0 00:08:37.285 } 00:08:37.285 ] 00:08:37.285 }' 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.285 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.853 [2024-11-19 10:02:51.827226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.853 [2024-11-19 10:02:51.827302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.853 [2024-11-19 
10:02:51.839288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.853 [2024-11-19 10:02:51.842144] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.853 [2024-11-19 10:02:51.842334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.853 [2024-11-19 10:02:51.842457] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.853 [2024-11-19 10:02:51.842591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.853 "name": "Existed_Raid", 00:08:37.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.853 "strip_size_kb": 64, 00:08:37.853 "state": "configuring", 00:08:37.853 "raid_level": "raid0", 00:08:37.853 "superblock": false, 00:08:37.853 "num_base_bdevs": 3, 00:08:37.853 "num_base_bdevs_discovered": 1, 00:08:37.853 "num_base_bdevs_operational": 3, 00:08:37.853 "base_bdevs_list": [ 00:08:37.853 { 00:08:37.853 "name": "BaseBdev1", 00:08:37.853 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:37.853 "is_configured": true, 00:08:37.853 "data_offset": 0, 00:08:37.853 "data_size": 65536 00:08:37.853 }, 00:08:37.853 { 00:08:37.853 "name": "BaseBdev2", 00:08:37.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.853 "is_configured": false, 00:08:37.853 "data_offset": 0, 00:08:37.853 "data_size": 0 00:08:37.853 }, 00:08:37.853 { 00:08:37.853 "name": "BaseBdev3", 00:08:37.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.853 "is_configured": false, 00:08:37.853 "data_offset": 0, 00:08:37.853 "data_size": 0 00:08:37.853 } 00:08:37.853 ] 00:08:37.853 }' 00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:37.853 10:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 [2024-11-19 10:02:52.401772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.422 BaseBdev2 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.422 10:02:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 [ 00:08:38.422 { 00:08:38.422 "name": "BaseBdev2", 00:08:38.422 "aliases": [ 00:08:38.422 "caa297f9-8231-4ec9-b157-0f4982aaa6fe" 00:08:38.422 ], 00:08:38.422 "product_name": "Malloc disk", 00:08:38.422 "block_size": 512, 00:08:38.422 "num_blocks": 65536, 00:08:38.422 "uuid": "caa297f9-8231-4ec9-b157-0f4982aaa6fe", 00:08:38.422 "assigned_rate_limits": { 00:08:38.422 "rw_ios_per_sec": 0, 00:08:38.422 "rw_mbytes_per_sec": 0, 00:08:38.422 "r_mbytes_per_sec": 0, 00:08:38.422 "w_mbytes_per_sec": 0 00:08:38.422 }, 00:08:38.422 "claimed": true, 00:08:38.422 "claim_type": "exclusive_write", 00:08:38.422 "zoned": false, 00:08:38.422 "supported_io_types": { 00:08:38.422 "read": true, 00:08:38.422 "write": true, 00:08:38.422 "unmap": true, 00:08:38.422 "flush": true, 00:08:38.422 "reset": true, 00:08:38.422 "nvme_admin": false, 00:08:38.422 "nvme_io": false, 00:08:38.422 "nvme_io_md": false, 00:08:38.422 "write_zeroes": true, 00:08:38.422 "zcopy": true, 00:08:38.422 "get_zone_info": false, 00:08:38.422 "zone_management": false, 00:08:38.422 "zone_append": false, 00:08:38.422 "compare": false, 00:08:38.422 "compare_and_write": false, 00:08:38.422 "abort": true, 00:08:38.422 "seek_hole": false, 00:08:38.422 "seek_data": false, 00:08:38.422 "copy": true, 00:08:38.422 "nvme_iov_md": false 00:08:38.422 }, 00:08:38.422 "memory_domains": [ 00:08:38.422 { 00:08:38.422 "dma_device_id": "system", 00:08:38.422 "dma_device_type": 1 00:08:38.422 }, 00:08:38.422 { 00:08:38.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.422 "dma_device_type": 2 00:08:38.422 } 00:08:38.422 ], 00:08:38.422 "driver_specific": {} 00:08:38.422 } 00:08:38.422 ] 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.422 10:02:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.422 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.422 "name": "Existed_Raid", 00:08:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.422 "strip_size_kb": 64, 00:08:38.422 "state": "configuring", 00:08:38.422 "raid_level": "raid0", 00:08:38.422 "superblock": false, 00:08:38.422 "num_base_bdevs": 3, 00:08:38.422 "num_base_bdevs_discovered": 2, 00:08:38.422 "num_base_bdevs_operational": 3, 00:08:38.422 "base_bdevs_list": [ 00:08:38.422 { 00:08:38.422 "name": "BaseBdev1", 00:08:38.422 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:38.422 "is_configured": true, 00:08:38.422 "data_offset": 0, 00:08:38.422 "data_size": 65536 00:08:38.423 }, 00:08:38.423 { 00:08:38.423 "name": "BaseBdev2", 00:08:38.423 "uuid": "caa297f9-8231-4ec9-b157-0f4982aaa6fe", 00:08:38.423 "is_configured": true, 00:08:38.423 "data_offset": 0, 00:08:38.423 "data_size": 65536 00:08:38.423 }, 00:08:38.423 { 00:08:38.423 "name": "BaseBdev3", 00:08:38.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.423 "is_configured": false, 00:08:38.423 "data_offset": 0, 00:08:38.423 "data_size": 0 00:08:38.423 } 00:08:38.423 ] 00:08:38.423 }' 00:08:38.423 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.423 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.991 10:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.991 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.991 10:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.991 [2024-11-19 10:02:53.006101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.991 [2024-11-19 10:02:53.006170] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.991 [2024-11-19 10:02:53.006193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:38.991 [2024-11-19 10:02:53.006565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:38.991 [2024-11-19 10:02:53.006795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.991 [2024-11-19 10:02:53.006854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.991 [2024-11-19 10:02:53.007251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.991 BaseBdev3 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.991 
10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.991 [ 00:08:38.991 { 00:08:38.991 "name": "BaseBdev3", 00:08:38.991 "aliases": [ 00:08:38.991 "023965db-3530-4ba9-b8cf-8d04125972a7" 00:08:38.991 ], 00:08:38.991 "product_name": "Malloc disk", 00:08:38.991 "block_size": 512, 00:08:38.991 "num_blocks": 65536, 00:08:38.991 "uuid": "023965db-3530-4ba9-b8cf-8d04125972a7", 00:08:38.991 "assigned_rate_limits": { 00:08:38.991 "rw_ios_per_sec": 0, 00:08:38.991 "rw_mbytes_per_sec": 0, 00:08:38.991 "r_mbytes_per_sec": 0, 00:08:38.991 "w_mbytes_per_sec": 0 00:08:38.991 }, 00:08:38.991 "claimed": true, 00:08:38.991 "claim_type": "exclusive_write", 00:08:38.991 "zoned": false, 00:08:38.991 "supported_io_types": { 00:08:38.991 "read": true, 00:08:38.991 "write": true, 00:08:38.991 "unmap": true, 00:08:38.991 "flush": true, 00:08:38.991 "reset": true, 00:08:38.991 "nvme_admin": false, 00:08:38.991 "nvme_io": false, 00:08:38.991 "nvme_io_md": false, 00:08:38.991 "write_zeroes": true, 00:08:38.991 "zcopy": true, 00:08:38.991 "get_zone_info": false, 00:08:38.991 "zone_management": false, 00:08:38.991 "zone_append": false, 00:08:38.991 "compare": false, 00:08:38.991 "compare_and_write": false, 00:08:38.991 "abort": true, 00:08:38.991 "seek_hole": false, 00:08:38.991 "seek_data": false, 00:08:38.991 "copy": true, 00:08:38.991 "nvme_iov_md": false 00:08:38.991 }, 00:08:38.991 "memory_domains": [ 00:08:38.991 { 00:08:38.991 "dma_device_id": "system", 00:08:38.991 "dma_device_type": 1 00:08:38.991 }, 00:08:38.991 { 00:08:38.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.991 "dma_device_type": 2 00:08:38.991 } 00:08:38.991 ], 00:08:38.991 "driver_specific": {} 00:08:38.991 } 00:08:38.991 ] 
00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.991 "name": "Existed_Raid", 00:08:38.991 "uuid": "86037166-630e-44d6-b7a4-b39a8e337188", 00:08:38.991 "strip_size_kb": 64, 00:08:38.991 "state": "online", 00:08:38.991 "raid_level": "raid0", 00:08:38.991 "superblock": false, 00:08:38.991 "num_base_bdevs": 3, 00:08:38.991 "num_base_bdevs_discovered": 3, 00:08:38.991 "num_base_bdevs_operational": 3, 00:08:38.991 "base_bdevs_list": [ 00:08:38.991 { 00:08:38.991 "name": "BaseBdev1", 00:08:38.991 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:38.991 "is_configured": true, 00:08:38.991 "data_offset": 0, 00:08:38.991 "data_size": 65536 00:08:38.991 }, 00:08:38.991 { 00:08:38.991 "name": "BaseBdev2", 00:08:38.991 "uuid": "caa297f9-8231-4ec9-b157-0f4982aaa6fe", 00:08:38.991 "is_configured": true, 00:08:38.991 "data_offset": 0, 00:08:38.991 "data_size": 65536 00:08:38.991 }, 00:08:38.991 { 00:08:38.991 "name": "BaseBdev3", 00:08:38.991 "uuid": "023965db-3530-4ba9-b8cf-8d04125972a7", 00:08:38.991 "is_configured": true, 00:08:38.991 "data_offset": 0, 00:08:38.991 "data_size": 65536 00:08:38.991 } 00:08:38.991 ] 00:08:38.991 }' 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.991 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.560 [2024-11-19 10:02:53.538715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.560 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.560 "name": "Existed_Raid", 00:08:39.560 "aliases": [ 00:08:39.560 "86037166-630e-44d6-b7a4-b39a8e337188" 00:08:39.560 ], 00:08:39.560 "product_name": "Raid Volume", 00:08:39.560 "block_size": 512, 00:08:39.560 "num_blocks": 196608, 00:08:39.560 "uuid": "86037166-630e-44d6-b7a4-b39a8e337188", 00:08:39.560 "assigned_rate_limits": { 00:08:39.560 "rw_ios_per_sec": 0, 00:08:39.560 "rw_mbytes_per_sec": 0, 00:08:39.560 "r_mbytes_per_sec": 0, 00:08:39.560 "w_mbytes_per_sec": 0 00:08:39.560 }, 00:08:39.560 "claimed": false, 00:08:39.560 "zoned": false, 00:08:39.560 "supported_io_types": { 00:08:39.560 "read": true, 00:08:39.560 "write": true, 00:08:39.560 "unmap": true, 00:08:39.560 "flush": true, 00:08:39.560 "reset": true, 00:08:39.560 "nvme_admin": false, 00:08:39.560 "nvme_io": false, 00:08:39.560 "nvme_io_md": false, 00:08:39.560 "write_zeroes": true, 00:08:39.560 "zcopy": false, 00:08:39.560 "get_zone_info": false, 00:08:39.560 "zone_management": false, 00:08:39.560 
"zone_append": false, 00:08:39.560 "compare": false, 00:08:39.560 "compare_and_write": false, 00:08:39.560 "abort": false, 00:08:39.560 "seek_hole": false, 00:08:39.560 "seek_data": false, 00:08:39.560 "copy": false, 00:08:39.560 "nvme_iov_md": false 00:08:39.560 }, 00:08:39.560 "memory_domains": [ 00:08:39.560 { 00:08:39.560 "dma_device_id": "system", 00:08:39.560 "dma_device_type": 1 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.560 "dma_device_type": 2 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "dma_device_id": "system", 00:08:39.560 "dma_device_type": 1 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.560 "dma_device_type": 2 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "dma_device_id": "system", 00:08:39.560 "dma_device_type": 1 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.560 "dma_device_type": 2 00:08:39.560 } 00:08:39.560 ], 00:08:39.560 "driver_specific": { 00:08:39.560 "raid": { 00:08:39.560 "uuid": "86037166-630e-44d6-b7a4-b39a8e337188", 00:08:39.560 "strip_size_kb": 64, 00:08:39.560 "state": "online", 00:08:39.560 "raid_level": "raid0", 00:08:39.560 "superblock": false, 00:08:39.560 "num_base_bdevs": 3, 00:08:39.560 "num_base_bdevs_discovered": 3, 00:08:39.560 "num_base_bdevs_operational": 3, 00:08:39.560 "base_bdevs_list": [ 00:08:39.560 { 00:08:39.560 "name": "BaseBdev1", 00:08:39.560 "uuid": "7e9dfb2f-4404-4b25-a2dd-60138f15385b", 00:08:39.560 "is_configured": true, 00:08:39.560 "data_offset": 0, 00:08:39.560 "data_size": 65536 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "name": "BaseBdev2", 00:08:39.560 "uuid": "caa297f9-8231-4ec9-b157-0f4982aaa6fe", 00:08:39.560 "is_configured": true, 00:08:39.560 "data_offset": 0, 00:08:39.560 "data_size": 65536 00:08:39.560 }, 00:08:39.560 { 00:08:39.560 "name": "BaseBdev3", 00:08:39.560 "uuid": "023965db-3530-4ba9-b8cf-8d04125972a7", 00:08:39.560 "is_configured": true, 
00:08:39.560 "data_offset": 0, 00:08:39.560 "data_size": 65536 00:08:39.560 } 00:08:39.560 ] 00:08:39.560 } 00:08:39.560 } 00:08:39.560 }' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.561 BaseBdev2 00:08:39.561 BaseBdev3' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.561 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.820 [2024-11-19 10:02:53.858491] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.820 [2024-11-19 10:02:53.858532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.820 [2024-11-19 10:02:53.858614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.820 10:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.820 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.820 "name": "Existed_Raid", 00:08:39.820 "uuid": "86037166-630e-44d6-b7a4-b39a8e337188", 00:08:39.820 "strip_size_kb": 64, 00:08:39.820 "state": "offline", 00:08:39.820 "raid_level": "raid0", 00:08:39.820 "superblock": false, 00:08:39.820 "num_base_bdevs": 3, 00:08:39.820 "num_base_bdevs_discovered": 2, 00:08:39.820 "num_base_bdevs_operational": 2, 00:08:39.820 "base_bdevs_list": [ 00:08:39.820 { 00:08:39.820 "name": null, 00:08:39.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.820 "is_configured": false, 00:08:39.820 "data_offset": 0, 00:08:39.820 "data_size": 65536 00:08:39.820 }, 00:08:39.820 { 00:08:39.820 "name": "BaseBdev2", 00:08:39.820 "uuid": "caa297f9-8231-4ec9-b157-0f4982aaa6fe", 00:08:39.820 "is_configured": true, 00:08:39.820 "data_offset": 0, 00:08:39.820 "data_size": 65536 00:08:39.820 }, 00:08:39.820 { 00:08:39.820 "name": "BaseBdev3", 00:08:39.820 "uuid": "023965db-3530-4ba9-b8cf-8d04125972a7", 00:08:39.820 "is_configured": true, 00:08:39.820 "data_offset": 0, 00:08:39.820 "data_size": 65536 00:08:39.820 } 00:08:39.820 ] 00:08:39.820 }' 00:08:39.820 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.820 10:02:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.388 [2024-11-19 10:02:54.522654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.388 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.647 10:02:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.647 [2024-11-19 10:02:54.675443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.647 [2024-11-19 10:02:54.675521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.647 BaseBdev2 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.647 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 [ 00:08:40.907 { 00:08:40.907 "name": "BaseBdev2", 00:08:40.907 "aliases": [ 00:08:40.907 "e350db50-24c5-418e-bffc-6a4470480627" 00:08:40.907 ], 00:08:40.907 "product_name": "Malloc disk", 00:08:40.907 "block_size": 512, 00:08:40.907 "num_blocks": 65536, 00:08:40.907 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:40.907 "assigned_rate_limits": { 00:08:40.907 "rw_ios_per_sec": 0, 00:08:40.907 "rw_mbytes_per_sec": 0, 00:08:40.907 "r_mbytes_per_sec": 0, 00:08:40.907 "w_mbytes_per_sec": 0 00:08:40.907 }, 00:08:40.907 "claimed": false, 00:08:40.907 "zoned": false, 00:08:40.907 "supported_io_types": { 00:08:40.907 "read": true, 00:08:40.907 "write": true, 00:08:40.907 "unmap": true, 00:08:40.907 "flush": true, 00:08:40.907 "reset": true, 00:08:40.907 "nvme_admin": false, 00:08:40.907 "nvme_io": false, 00:08:40.907 "nvme_io_md": false, 00:08:40.907 "write_zeroes": true, 00:08:40.907 "zcopy": true, 00:08:40.907 "get_zone_info": false, 00:08:40.907 "zone_management": false, 00:08:40.907 "zone_append": false, 00:08:40.907 "compare": false, 00:08:40.907 "compare_and_write": false, 00:08:40.907 "abort": true, 00:08:40.907 "seek_hole": false, 00:08:40.907 "seek_data": false, 00:08:40.907 "copy": true, 00:08:40.907 "nvme_iov_md": false 00:08:40.907 }, 00:08:40.907 "memory_domains": [ 00:08:40.907 { 00:08:40.907 "dma_device_id": "system", 00:08:40.907 "dma_device_type": 1 00:08:40.907 }, 
00:08:40.907 { 00:08:40.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.907 "dma_device_type": 2 00:08:40.907 } 00:08:40.907 ], 00:08:40.907 "driver_specific": {} 00:08:40.907 } 00:08:40.907 ] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 BaseBdev3 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 [ 00:08:40.907 { 00:08:40.907 "name": "BaseBdev3", 00:08:40.907 "aliases": [ 00:08:40.907 "721ced21-4a0b-44db-bbf3-aa5888193dee" 00:08:40.907 ], 00:08:40.907 "product_name": "Malloc disk", 00:08:40.907 "block_size": 512, 00:08:40.907 "num_blocks": 65536, 00:08:40.907 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:40.907 "assigned_rate_limits": { 00:08:40.907 "rw_ios_per_sec": 0, 00:08:40.907 "rw_mbytes_per_sec": 0, 00:08:40.907 "r_mbytes_per_sec": 0, 00:08:40.907 "w_mbytes_per_sec": 0 00:08:40.907 }, 00:08:40.907 "claimed": false, 00:08:40.907 "zoned": false, 00:08:40.907 "supported_io_types": { 00:08:40.907 "read": true, 00:08:40.907 "write": true, 00:08:40.907 "unmap": true, 00:08:40.907 "flush": true, 00:08:40.907 "reset": true, 00:08:40.907 "nvme_admin": false, 00:08:40.907 "nvme_io": false, 00:08:40.907 "nvme_io_md": false, 00:08:40.907 "write_zeroes": true, 00:08:40.907 "zcopy": true, 00:08:40.907 "get_zone_info": false, 00:08:40.907 "zone_management": false, 00:08:40.907 "zone_append": false, 00:08:40.907 "compare": false, 00:08:40.907 "compare_and_write": false, 00:08:40.907 "abort": true, 00:08:40.907 "seek_hole": false, 00:08:40.907 "seek_data": false, 00:08:40.907 "copy": true, 00:08:40.907 "nvme_iov_md": false 00:08:40.907 }, 00:08:40.907 "memory_domains": [ 00:08:40.907 { 00:08:40.907 "dma_device_id": "system", 00:08:40.907 "dma_device_type": 1 00:08:40.907 }, 00:08:40.907 { 
00:08:40.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.907 "dma_device_type": 2 00:08:40.907 } 00:08:40.907 ], 00:08:40.907 "driver_specific": {} 00:08:40.907 } 00:08:40.907 ] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.907 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.908 [2024-11-19 10:02:54.993758] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.908 [2024-11-19 10:02:54.993834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.908 [2024-11-19 10:02:54.993871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.908 [2024-11-19 10:02:54.996511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.908 10:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.908 "name": "Existed_Raid", 00:08:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.908 "strip_size_kb": 64, 00:08:40.908 "state": "configuring", 00:08:40.908 "raid_level": "raid0", 00:08:40.908 "superblock": false, 00:08:40.908 "num_base_bdevs": 3, 00:08:40.908 "num_base_bdevs_discovered": 2, 00:08:40.908 "num_base_bdevs_operational": 3, 00:08:40.908 "base_bdevs_list": [ 00:08:40.908 { 00:08:40.908 "name": "BaseBdev1", 00:08:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.908 
"is_configured": false, 00:08:40.908 "data_offset": 0, 00:08:40.908 "data_size": 0 00:08:40.908 }, 00:08:40.908 { 00:08:40.908 "name": "BaseBdev2", 00:08:40.908 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:40.908 "is_configured": true, 00:08:40.908 "data_offset": 0, 00:08:40.908 "data_size": 65536 00:08:40.908 }, 00:08:40.908 { 00:08:40.908 "name": "BaseBdev3", 00:08:40.908 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:40.908 "is_configured": true, 00:08:40.908 "data_offset": 0, 00:08:40.908 "data_size": 65536 00:08:40.908 } 00:08:40.908 ] 00:08:40.908 }' 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.908 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.475 [2024-11-19 10:02:55.517934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.475 10:02:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.475 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.476 "name": "Existed_Raid", 00:08:41.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.476 "strip_size_kb": 64, 00:08:41.476 "state": "configuring", 00:08:41.476 "raid_level": "raid0", 00:08:41.476 "superblock": false, 00:08:41.476 "num_base_bdevs": 3, 00:08:41.476 "num_base_bdevs_discovered": 1, 00:08:41.476 "num_base_bdevs_operational": 3, 00:08:41.476 "base_bdevs_list": [ 00:08:41.476 { 00:08:41.476 "name": "BaseBdev1", 00:08:41.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.476 "is_configured": false, 00:08:41.476 "data_offset": 0, 00:08:41.476 "data_size": 0 00:08:41.476 }, 00:08:41.476 { 00:08:41.476 "name": null, 00:08:41.476 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:41.476 "is_configured": false, 00:08:41.476 "data_offset": 0, 
00:08:41.476 "data_size": 65536 00:08:41.476 }, 00:08:41.476 { 00:08:41.476 "name": "BaseBdev3", 00:08:41.476 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:41.476 "is_configured": true, 00:08:41.476 "data_offset": 0, 00:08:41.476 "data_size": 65536 00:08:41.476 } 00:08:41.476 ] 00:08:41.476 }' 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.476 10:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 [2024-11-19 10:02:56.145585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.044 BaseBdev1 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 [ 00:08:42.044 { 00:08:42.044 "name": "BaseBdev1", 00:08:42.044 "aliases": [ 00:08:42.044 "4dbdaf30-f574-4568-9150-b0a8979e868f" 00:08:42.044 ], 00:08:42.044 "product_name": "Malloc disk", 00:08:42.044 "block_size": 512, 00:08:42.044 "num_blocks": 65536, 00:08:42.044 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:42.044 "assigned_rate_limits": { 00:08:42.044 "rw_ios_per_sec": 0, 00:08:42.044 "rw_mbytes_per_sec": 0, 00:08:42.044 "r_mbytes_per_sec": 0, 00:08:42.044 "w_mbytes_per_sec": 0 00:08:42.044 }, 00:08:42.044 "claimed": true, 00:08:42.044 "claim_type": "exclusive_write", 00:08:42.044 "zoned": false, 00:08:42.044 "supported_io_types": { 00:08:42.044 "read": true, 00:08:42.044 "write": true, 00:08:42.044 "unmap": 
true, 00:08:42.044 "flush": true, 00:08:42.044 "reset": true, 00:08:42.044 "nvme_admin": false, 00:08:42.044 "nvme_io": false, 00:08:42.044 "nvme_io_md": false, 00:08:42.044 "write_zeroes": true, 00:08:42.044 "zcopy": true, 00:08:42.044 "get_zone_info": false, 00:08:42.044 "zone_management": false, 00:08:42.044 "zone_append": false, 00:08:42.044 "compare": false, 00:08:42.044 "compare_and_write": false, 00:08:42.044 "abort": true, 00:08:42.044 "seek_hole": false, 00:08:42.044 "seek_data": false, 00:08:42.044 "copy": true, 00:08:42.044 "nvme_iov_md": false 00:08:42.044 }, 00:08:42.044 "memory_domains": [ 00:08:42.044 { 00:08:42.044 "dma_device_id": "system", 00:08:42.044 "dma_device_type": 1 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.044 "dma_device_type": 2 00:08:42.044 } 00:08:42.044 ], 00:08:42.044 "driver_specific": {} 00:08:42.044 } 00:08:42.044 ] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.044 10:02:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.044 "name": "Existed_Raid", 00:08:42.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.044 "strip_size_kb": 64, 00:08:42.044 "state": "configuring", 00:08:42.044 "raid_level": "raid0", 00:08:42.044 "superblock": false, 00:08:42.044 "num_base_bdevs": 3, 00:08:42.044 "num_base_bdevs_discovered": 2, 00:08:42.044 "num_base_bdevs_operational": 3, 00:08:42.044 "base_bdevs_list": [ 00:08:42.044 { 00:08:42.044 "name": "BaseBdev1", 00:08:42.044 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:42.044 "is_configured": true, 00:08:42.044 "data_offset": 0, 00:08:42.044 "data_size": 65536 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "name": null, 00:08:42.044 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:42.044 "is_configured": false, 00:08:42.044 "data_offset": 0, 00:08:42.044 "data_size": 65536 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "name": "BaseBdev3", 00:08:42.044 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:42.044 "is_configured": true, 00:08:42.044 "data_offset": 0, 
00:08:42.044 "data_size": 65536 00:08:42.044 } 00:08:42.044 ] 00:08:42.044 }' 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.044 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 [2024-11-19 10:02:56.765901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.612 "name": "Existed_Raid", 00:08:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.612 "strip_size_kb": 64, 00:08:42.612 "state": "configuring", 00:08:42.612 "raid_level": "raid0", 00:08:42.612 "superblock": false, 00:08:42.612 "num_base_bdevs": 3, 00:08:42.612 "num_base_bdevs_discovered": 1, 00:08:42.612 "num_base_bdevs_operational": 3, 00:08:42.612 "base_bdevs_list": [ 00:08:42.612 { 00:08:42.612 "name": "BaseBdev1", 00:08:42.612 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:42.612 "is_configured": true, 00:08:42.612 "data_offset": 0, 00:08:42.612 "data_size": 65536 00:08:42.612 }, 00:08:42.612 { 
00:08:42.612 "name": null, 00:08:42.612 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:42.612 "is_configured": false, 00:08:42.612 "data_offset": 0, 00:08:42.612 "data_size": 65536 00:08:42.612 }, 00:08:42.612 { 00:08:42.612 "name": null, 00:08:42.612 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:42.612 "is_configured": false, 00:08:42.612 "data_offset": 0, 00:08:42.612 "data_size": 65536 00:08:42.612 } 00:08:42.612 ] 00:08:42.612 }' 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.612 10:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.216 [2024-11-19 10:02:57.390084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.216 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.501 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.501 "name": "Existed_Raid", 00:08:43.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.501 "strip_size_kb": 64, 00:08:43.501 "state": "configuring", 00:08:43.501 "raid_level": "raid0", 00:08:43.501 
"superblock": false, 00:08:43.501 "num_base_bdevs": 3, 00:08:43.501 "num_base_bdevs_discovered": 2, 00:08:43.501 "num_base_bdevs_operational": 3, 00:08:43.501 "base_bdevs_list": [ 00:08:43.501 { 00:08:43.501 "name": "BaseBdev1", 00:08:43.501 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:43.501 "is_configured": true, 00:08:43.501 "data_offset": 0, 00:08:43.501 "data_size": 65536 00:08:43.501 }, 00:08:43.501 { 00:08:43.501 "name": null, 00:08:43.501 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:43.501 "is_configured": false, 00:08:43.501 "data_offset": 0, 00:08:43.501 "data_size": 65536 00:08:43.501 }, 00:08:43.501 { 00:08:43.501 "name": "BaseBdev3", 00:08:43.501 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:43.501 "is_configured": true, 00:08:43.501 "data_offset": 0, 00:08:43.501 "data_size": 65536 00:08:43.501 } 00:08:43.501 ] 00:08:43.501 }' 00:08:43.501 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.501 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.770 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.770 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.770 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.770 10:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.770 10:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.030 [2024-11-19 10:02:58.078332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.030 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.030 "name": "Existed_Raid", 00:08:44.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.031 "strip_size_kb": 64, 00:08:44.031 "state": "configuring", 00:08:44.031 "raid_level": "raid0", 00:08:44.031 "superblock": false, 00:08:44.031 "num_base_bdevs": 3, 00:08:44.031 "num_base_bdevs_discovered": 1, 00:08:44.031 "num_base_bdevs_operational": 3, 00:08:44.031 "base_bdevs_list": [ 00:08:44.031 { 00:08:44.031 "name": null, 00:08:44.031 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:44.031 "is_configured": false, 00:08:44.031 "data_offset": 0, 00:08:44.031 "data_size": 65536 00:08:44.031 }, 00:08:44.031 { 00:08:44.031 "name": null, 00:08:44.031 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:44.031 "is_configured": false, 00:08:44.031 "data_offset": 0, 00:08:44.031 "data_size": 65536 00:08:44.031 }, 00:08:44.031 { 00:08:44.031 "name": "BaseBdev3", 00:08:44.031 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:44.031 "is_configured": true, 00:08:44.031 "data_offset": 0, 00:08:44.031 "data_size": 65536 00:08:44.031 } 00:08:44.031 ] 00:08:44.031 }' 00:08:44.031 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.031 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.600 [2024-11-19 10:02:58.719823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.600 "name": "Existed_Raid", 00:08:44.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.600 "strip_size_kb": 64, 00:08:44.600 "state": "configuring", 00:08:44.600 "raid_level": "raid0", 00:08:44.600 "superblock": false, 00:08:44.600 "num_base_bdevs": 3, 00:08:44.600 "num_base_bdevs_discovered": 2, 00:08:44.600 "num_base_bdevs_operational": 3, 00:08:44.600 "base_bdevs_list": [ 00:08:44.600 { 00:08:44.600 "name": null, 00:08:44.600 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:44.600 "is_configured": false, 00:08:44.600 "data_offset": 0, 00:08:44.600 "data_size": 65536 00:08:44.600 }, 00:08:44.600 { 00:08:44.600 "name": "BaseBdev2", 00:08:44.600 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:44.600 "is_configured": true, 00:08:44.600 "data_offset": 0, 00:08:44.600 "data_size": 65536 00:08:44.600 }, 00:08:44.600 { 00:08:44.600 "name": "BaseBdev3", 00:08:44.600 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:44.600 "is_configured": true, 00:08:44.600 "data_offset": 0, 00:08:44.600 "data_size": 65536 00:08:44.600 } 00:08:44.600 ] 00:08:44.600 }' 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.600 10:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.167 10:02:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4dbdaf30-f574-4568-9150-b0a8979e868f 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 [2024-11-19 10:02:59.346639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:45.167 [2024-11-19 10:02:59.347026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:45.167 [2024-11-19 10:02:59.347072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:45.167 [2024-11-19 10:02:59.347445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:45.167 [2024-11-19 10:02:59.347678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:45.167 [2024-11-19 10:02:59.347695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:45.167 [2024-11-19 10:02:59.348077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.167 NewBaseBdev 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:45.167 [ 00:08:45.167 { 00:08:45.167 "name": "NewBaseBdev", 00:08:45.167 "aliases": [ 00:08:45.167 "4dbdaf30-f574-4568-9150-b0a8979e868f" 00:08:45.167 ], 00:08:45.167 "product_name": "Malloc disk", 00:08:45.167 "block_size": 512, 00:08:45.167 "num_blocks": 65536, 00:08:45.167 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:45.167 "assigned_rate_limits": { 00:08:45.167 "rw_ios_per_sec": 0, 00:08:45.167 "rw_mbytes_per_sec": 0, 00:08:45.167 "r_mbytes_per_sec": 0, 00:08:45.167 "w_mbytes_per_sec": 0 00:08:45.167 }, 00:08:45.167 "claimed": true, 00:08:45.167 "claim_type": "exclusive_write", 00:08:45.167 "zoned": false, 00:08:45.167 "supported_io_types": { 00:08:45.167 "read": true, 00:08:45.167 "write": true, 00:08:45.167 "unmap": true, 00:08:45.167 "flush": true, 00:08:45.167 "reset": true, 00:08:45.167 "nvme_admin": false, 00:08:45.167 "nvme_io": false, 00:08:45.167 "nvme_io_md": false, 00:08:45.167 "write_zeroes": true, 00:08:45.167 "zcopy": true, 00:08:45.167 "get_zone_info": false, 00:08:45.167 "zone_management": false, 00:08:45.167 "zone_append": false, 00:08:45.167 "compare": false, 00:08:45.167 "compare_and_write": false, 00:08:45.167 "abort": true, 00:08:45.167 "seek_hole": false, 00:08:45.167 "seek_data": false, 00:08:45.167 "copy": true, 00:08:45.167 "nvme_iov_md": false 00:08:45.167 }, 00:08:45.167 "memory_domains": [ 00:08:45.167 { 00:08:45.167 "dma_device_id": "system", 00:08:45.167 "dma_device_type": 1 00:08:45.167 }, 00:08:45.167 { 00:08:45.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.167 "dma_device_type": 2 00:08:45.167 } 00:08:45.167 ], 00:08:45.167 "driver_specific": {} 00:08:45.167 } 00:08:45.167 ] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.167 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.168 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.426 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.426 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.426 "name": "Existed_Raid", 00:08:45.426 "uuid": "2c99c4f5-f15b-4899-807d-70841c4b7cb4", 00:08:45.426 "strip_size_kb": 64, 00:08:45.426 "state": "online", 00:08:45.426 "raid_level": "raid0", 00:08:45.426 "superblock": false, 00:08:45.426 "num_base_bdevs": 3, 00:08:45.426 
"num_base_bdevs_discovered": 3, 00:08:45.426 "num_base_bdevs_operational": 3, 00:08:45.426 "base_bdevs_list": [ 00:08:45.426 { 00:08:45.426 "name": "NewBaseBdev", 00:08:45.426 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:45.426 "is_configured": true, 00:08:45.426 "data_offset": 0, 00:08:45.426 "data_size": 65536 00:08:45.426 }, 00:08:45.426 { 00:08:45.426 "name": "BaseBdev2", 00:08:45.426 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:45.426 "is_configured": true, 00:08:45.426 "data_offset": 0, 00:08:45.426 "data_size": 65536 00:08:45.426 }, 00:08:45.426 { 00:08:45.426 "name": "BaseBdev3", 00:08:45.426 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:45.426 "is_configured": true, 00:08:45.426 "data_offset": 0, 00:08:45.426 "data_size": 65536 00:08:45.426 } 00:08:45.426 ] 00:08:45.426 }' 00:08:45.426 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.426 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.684 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.684 [2024-11-19 10:02:59.899271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.943 10:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.943 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.943 "name": "Existed_Raid", 00:08:45.943 "aliases": [ 00:08:45.943 "2c99c4f5-f15b-4899-807d-70841c4b7cb4" 00:08:45.943 ], 00:08:45.943 "product_name": "Raid Volume", 00:08:45.943 "block_size": 512, 00:08:45.943 "num_blocks": 196608, 00:08:45.943 "uuid": "2c99c4f5-f15b-4899-807d-70841c4b7cb4", 00:08:45.943 "assigned_rate_limits": { 00:08:45.943 "rw_ios_per_sec": 0, 00:08:45.943 "rw_mbytes_per_sec": 0, 00:08:45.943 "r_mbytes_per_sec": 0, 00:08:45.943 "w_mbytes_per_sec": 0 00:08:45.943 }, 00:08:45.943 "claimed": false, 00:08:45.943 "zoned": false, 00:08:45.943 "supported_io_types": { 00:08:45.943 "read": true, 00:08:45.943 "write": true, 00:08:45.943 "unmap": true, 00:08:45.943 "flush": true, 00:08:45.943 "reset": true, 00:08:45.943 "nvme_admin": false, 00:08:45.943 "nvme_io": false, 00:08:45.943 "nvme_io_md": false, 00:08:45.943 "write_zeroes": true, 00:08:45.943 "zcopy": false, 00:08:45.944 "get_zone_info": false, 00:08:45.944 "zone_management": false, 00:08:45.944 "zone_append": false, 00:08:45.944 "compare": false, 00:08:45.944 "compare_and_write": false, 00:08:45.944 "abort": false, 00:08:45.944 "seek_hole": false, 00:08:45.944 "seek_data": false, 00:08:45.944 "copy": false, 00:08:45.944 "nvme_iov_md": false 00:08:45.944 }, 00:08:45.944 "memory_domains": [ 00:08:45.944 { 00:08:45.944 "dma_device_id": "system", 00:08:45.944 "dma_device_type": 1 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.944 "dma_device_type": 2 00:08:45.944 }, 
00:08:45.944 { 00:08:45.944 "dma_device_id": "system", 00:08:45.944 "dma_device_type": 1 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.944 "dma_device_type": 2 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "dma_device_id": "system", 00:08:45.944 "dma_device_type": 1 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.944 "dma_device_type": 2 00:08:45.944 } 00:08:45.944 ], 00:08:45.944 "driver_specific": { 00:08:45.944 "raid": { 00:08:45.944 "uuid": "2c99c4f5-f15b-4899-807d-70841c4b7cb4", 00:08:45.944 "strip_size_kb": 64, 00:08:45.944 "state": "online", 00:08:45.944 "raid_level": "raid0", 00:08:45.944 "superblock": false, 00:08:45.944 "num_base_bdevs": 3, 00:08:45.944 "num_base_bdevs_discovered": 3, 00:08:45.944 "num_base_bdevs_operational": 3, 00:08:45.944 "base_bdevs_list": [ 00:08:45.944 { 00:08:45.944 "name": "NewBaseBdev", 00:08:45.944 "uuid": "4dbdaf30-f574-4568-9150-b0a8979e868f", 00:08:45.944 "is_configured": true, 00:08:45.944 "data_offset": 0, 00:08:45.944 "data_size": 65536 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "name": "BaseBdev2", 00:08:45.944 "uuid": "e350db50-24c5-418e-bffc-6a4470480627", 00:08:45.944 "is_configured": true, 00:08:45.944 "data_offset": 0, 00:08:45.944 "data_size": 65536 00:08:45.944 }, 00:08:45.944 { 00:08:45.944 "name": "BaseBdev3", 00:08:45.944 "uuid": "721ced21-4a0b-44db-bbf3-aa5888193dee", 00:08:45.944 "is_configured": true, 00:08:45.944 "data_offset": 0, 00:08:45.944 "data_size": 65536 00:08:45.944 } 00:08:45.944 ] 00:08:45.944 } 00:08:45.944 } 00:08:45.944 }' 00:08:45.944 10:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:45.944 BaseBdev2 00:08:45.944 BaseBdev3' 00:08:45.944 10:03:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.944 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.203 [2024-11-19 10:03:00.242984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.203 [2024-11-19 10:03:00.243025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.203 [2024-11-19 10:03:00.243148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.203 [2024-11-19 10:03:00.243234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.203 [2024-11-19 10:03:00.243257] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63659 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63659 ']' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63659 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63659 00:08:46.203 killing process with pid 63659 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63659' 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63659 00:08:46.203 10:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63659 00:08:46.203 [2024-11-19 10:03:00.281362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.463 [2024-11-19 10:03:00.578469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.843 00:08:47.843 real 0m12.119s 00:08:47.843 user 0m19.916s 00:08:47.843 sys 0m1.738s 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.843 ************************************ 00:08:47.843 END TEST raid_state_function_test 00:08:47.843 ************************************ 00:08:47.843 10:03:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:47.843 10:03:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.843 10:03:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.843 10:03:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.843 ************************************ 00:08:47.843 START TEST raid_state_function_test_sb 00:08:47.843 ************************************ 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64297 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64297' 00:08:47.843 Process raid pid: 64297 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64297 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64297 ']' 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.843 10:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.843 [2024-11-19 10:03:01.878880] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:08:47.843 [2024-11-19 10:03:01.879065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.843 [2024-11-19 10:03:02.059430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.102 [2024-11-19 10:03:02.207392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.361 [2024-11-19 10:03:02.437257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.361 [2024-11-19 10:03:02.437321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.929 [2024-11-19 10:03:02.894566] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.929 [2024-11-19 10:03:02.894642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.929 [2024-11-19 10:03:02.894661] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.929 [2024-11-19 10:03:02.894679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.929 [2024-11-19 10:03:02.894689] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:48.929 [2024-11-19 10:03:02.894705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.929 "name": "Existed_Raid", 00:08:48.929 "uuid": "b96fa193-f793-403d-9a61-97c29d92d34c", 00:08:48.929 "strip_size_kb": 64, 00:08:48.929 "state": "configuring", 00:08:48.929 "raid_level": "raid0", 00:08:48.929 "superblock": true, 00:08:48.929 "num_base_bdevs": 3, 00:08:48.929 "num_base_bdevs_discovered": 0, 00:08:48.929 "num_base_bdevs_operational": 3, 00:08:48.929 "base_bdevs_list": [ 00:08:48.929 { 00:08:48.929 "name": "BaseBdev1", 00:08:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.929 "is_configured": false, 00:08:48.929 "data_offset": 0, 00:08:48.929 "data_size": 0 00:08:48.929 }, 00:08:48.929 { 00:08:48.929 "name": "BaseBdev2", 00:08:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.929 "is_configured": false, 00:08:48.929 "data_offset": 0, 00:08:48.929 "data_size": 0 00:08:48.929 }, 00:08:48.929 { 00:08:48.929 "name": "BaseBdev3", 00:08:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.929 "is_configured": false, 00:08:48.929 "data_offset": 0, 00:08:48.929 "data_size": 0 00:08:48.929 } 00:08:48.929 ] 00:08:48.929 }' 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.929 10:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [2024-11-19 10:03:03.430607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.498 [2024-11-19 10:03:03.430660] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [2024-11-19 10:03:03.438589] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.498 [2024-11-19 10:03:03.438652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.498 [2024-11-19 10:03:03.438668] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.498 [2024-11-19 10:03:03.438685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.498 [2024-11-19 10:03:03.438695] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.498 [2024-11-19 10:03:03.438710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [2024-11-19 10:03:03.487231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.498 BaseBdev1 
00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [ 00:08:49.498 { 00:08:49.498 "name": "BaseBdev1", 00:08:49.498 "aliases": [ 00:08:49.498 "aae669f7-5cd1-4486-8418-e39b43f1f48a" 00:08:49.498 ], 00:08:49.498 "product_name": "Malloc disk", 00:08:49.498 "block_size": 512, 00:08:49.498 "num_blocks": 65536, 00:08:49.498 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:49.498 "assigned_rate_limits": { 00:08:49.498 
"rw_ios_per_sec": 0, 00:08:49.498 "rw_mbytes_per_sec": 0, 00:08:49.498 "r_mbytes_per_sec": 0, 00:08:49.498 "w_mbytes_per_sec": 0 00:08:49.498 }, 00:08:49.498 "claimed": true, 00:08:49.498 "claim_type": "exclusive_write", 00:08:49.498 "zoned": false, 00:08:49.498 "supported_io_types": { 00:08:49.498 "read": true, 00:08:49.498 "write": true, 00:08:49.498 "unmap": true, 00:08:49.498 "flush": true, 00:08:49.498 "reset": true, 00:08:49.498 "nvme_admin": false, 00:08:49.498 "nvme_io": false, 00:08:49.498 "nvme_io_md": false, 00:08:49.498 "write_zeroes": true, 00:08:49.498 "zcopy": true, 00:08:49.498 "get_zone_info": false, 00:08:49.498 "zone_management": false, 00:08:49.498 "zone_append": false, 00:08:49.498 "compare": false, 00:08:49.498 "compare_and_write": false, 00:08:49.498 "abort": true, 00:08:49.498 "seek_hole": false, 00:08:49.498 "seek_data": false, 00:08:49.498 "copy": true, 00:08:49.498 "nvme_iov_md": false 00:08:49.498 }, 00:08:49.498 "memory_domains": [ 00:08:49.498 { 00:08:49.498 "dma_device_id": "system", 00:08:49.498 "dma_device_type": 1 00:08:49.498 }, 00:08:49.498 { 00:08:49.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.498 "dma_device_type": 2 00:08:49.498 } 00:08:49.498 ], 00:08:49.498 "driver_specific": {} 00:08:49.498 } 00:08:49.498 ] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.498 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.499 "name": "Existed_Raid", 00:08:49.499 "uuid": "64528ed2-cfe3-4c1c-838c-1eb870056cd6", 00:08:49.499 "strip_size_kb": 64, 00:08:49.499 "state": "configuring", 00:08:49.499 "raid_level": "raid0", 00:08:49.499 "superblock": true, 00:08:49.499 "num_base_bdevs": 3, 00:08:49.499 "num_base_bdevs_discovered": 1, 00:08:49.499 "num_base_bdevs_operational": 3, 00:08:49.499 "base_bdevs_list": [ 00:08:49.499 { 00:08:49.499 "name": "BaseBdev1", 00:08:49.499 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:49.499 "is_configured": true, 00:08:49.499 "data_offset": 2048, 00:08:49.499 "data_size": 63488 
00:08:49.499 }, 00:08:49.499 { 00:08:49.499 "name": "BaseBdev2", 00:08:49.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.499 "is_configured": false, 00:08:49.499 "data_offset": 0, 00:08:49.499 "data_size": 0 00:08:49.499 }, 00:08:49.499 { 00:08:49.499 "name": "BaseBdev3", 00:08:49.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.499 "is_configured": false, 00:08:49.499 "data_offset": 0, 00:08:49.499 "data_size": 0 00:08:49.499 } 00:08:49.499 ] 00:08:49.499 }' 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.499 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 10:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.069 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.069 10:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 [2024-11-19 10:03:04.003418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.069 [2024-11-19 10:03:04.003500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 [2024-11-19 10:03:04.011501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.069 [2024-11-19 
10:03:04.014184] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.069 [2024-11-19 10:03:04.014248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.069 [2024-11-19 10:03:04.014266] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.069 [2024-11-19 10:03:04.014282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.069 "name": "Existed_Raid", 00:08:50.069 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:50.069 "strip_size_kb": 64, 00:08:50.069 "state": "configuring", 00:08:50.069 "raid_level": "raid0", 00:08:50.069 "superblock": true, 00:08:50.069 "num_base_bdevs": 3, 00:08:50.069 "num_base_bdevs_discovered": 1, 00:08:50.069 "num_base_bdevs_operational": 3, 00:08:50.069 "base_bdevs_list": [ 00:08:50.069 { 00:08:50.069 "name": "BaseBdev1", 00:08:50.069 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:50.069 "is_configured": true, 00:08:50.069 "data_offset": 2048, 00:08:50.069 "data_size": 63488 00:08:50.069 }, 00:08:50.069 { 00:08:50.069 "name": "BaseBdev2", 00:08:50.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.069 "is_configured": false, 00:08:50.069 "data_offset": 0, 00:08:50.069 "data_size": 0 00:08:50.069 }, 00:08:50.069 { 00:08:50.069 "name": "BaseBdev3", 00:08:50.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.069 "is_configured": false, 00:08:50.069 "data_offset": 0, 00:08:50.069 "data_size": 0 00:08:50.069 } 00:08:50.069 ] 00:08:50.069 }' 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.069 10:03:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.328 [2024-11-19 10:03:04.541751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.328 BaseBdev2 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.328 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.587 [ 00:08:50.587 { 00:08:50.587 "name": "BaseBdev2", 00:08:50.587 "aliases": [ 00:08:50.587 "dfdfd370-5993-4120-816d-b5d9d16ceba2" 00:08:50.587 ], 00:08:50.587 "product_name": "Malloc disk", 00:08:50.587 "block_size": 512, 00:08:50.587 "num_blocks": 65536, 00:08:50.587 "uuid": "dfdfd370-5993-4120-816d-b5d9d16ceba2", 00:08:50.587 "assigned_rate_limits": { 00:08:50.587 "rw_ios_per_sec": 0, 00:08:50.587 "rw_mbytes_per_sec": 0, 00:08:50.587 "r_mbytes_per_sec": 0, 00:08:50.587 "w_mbytes_per_sec": 0 00:08:50.587 }, 00:08:50.587 "claimed": true, 00:08:50.587 "claim_type": "exclusive_write", 00:08:50.587 "zoned": false, 00:08:50.587 "supported_io_types": { 00:08:50.587 "read": true, 00:08:50.587 "write": true, 00:08:50.587 "unmap": true, 00:08:50.587 "flush": true, 00:08:50.587 "reset": true, 00:08:50.587 "nvme_admin": false, 00:08:50.587 "nvme_io": false, 00:08:50.587 "nvme_io_md": false, 00:08:50.587 "write_zeroes": true, 00:08:50.587 "zcopy": true, 00:08:50.587 "get_zone_info": false, 00:08:50.587 "zone_management": false, 00:08:50.587 "zone_append": false, 00:08:50.587 "compare": false, 00:08:50.587 "compare_and_write": false, 00:08:50.587 "abort": true, 00:08:50.587 "seek_hole": false, 00:08:50.587 "seek_data": false, 00:08:50.587 "copy": true, 00:08:50.587 "nvme_iov_md": false 00:08:50.587 }, 00:08:50.588 "memory_domains": [ 00:08:50.588 { 00:08:50.588 "dma_device_id": "system", 00:08:50.588 "dma_device_type": 1 00:08:50.588 }, 00:08:50.588 { 00:08:50.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.588 "dma_device_type": 2 00:08:50.588 } 00:08:50.588 ], 00:08:50.588 "driver_specific": {} 00:08:50.588 } 00:08:50.588 ] 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.588 "name": "Existed_Raid", 00:08:50.588 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:50.588 "strip_size_kb": 64, 00:08:50.588 "state": "configuring", 00:08:50.588 "raid_level": "raid0", 00:08:50.588 "superblock": true, 00:08:50.588 "num_base_bdevs": 3, 00:08:50.588 "num_base_bdevs_discovered": 2, 00:08:50.588 "num_base_bdevs_operational": 3, 00:08:50.588 "base_bdevs_list": [ 00:08:50.588 { 00:08:50.588 "name": "BaseBdev1", 00:08:50.588 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:50.588 "is_configured": true, 00:08:50.588 "data_offset": 2048, 00:08:50.588 "data_size": 63488 00:08:50.588 }, 00:08:50.588 { 00:08:50.588 "name": "BaseBdev2", 00:08:50.588 "uuid": "dfdfd370-5993-4120-816d-b5d9d16ceba2", 00:08:50.588 "is_configured": true, 00:08:50.588 "data_offset": 2048, 00:08:50.588 "data_size": 63488 00:08:50.588 }, 00:08:50.588 { 00:08:50.588 "name": "BaseBdev3", 00:08:50.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.588 "is_configured": false, 00:08:50.588 "data_offset": 0, 00:08:50.588 "data_size": 0 00:08:50.588 } 00:08:50.588 ] 00:08:50.588 }' 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.588 10:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.158 [2024-11-19 10:03:05.155475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.158 [2024-11-19 10:03:05.155872] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.158 [2024-11-19 10:03:05.155904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.158 BaseBdev3 00:08:51.158 [2024-11-19 10:03:05.156278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:51.158 [2024-11-19 10:03:05.156488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.158 [2024-11-19 10:03:05.156505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:51.158 [2024-11-19 10:03:05.156702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.158 [ 00:08:51.158 { 00:08:51.158 "name": "BaseBdev3", 00:08:51.158 "aliases": [ 00:08:51.158 "cb5d9e58-b728-481e-b4fb-617b5cfb56fa" 00:08:51.158 ], 00:08:51.158 "product_name": "Malloc disk", 00:08:51.158 "block_size": 512, 00:08:51.158 "num_blocks": 65536, 00:08:51.158 "uuid": "cb5d9e58-b728-481e-b4fb-617b5cfb56fa", 00:08:51.158 "assigned_rate_limits": { 00:08:51.158 "rw_ios_per_sec": 0, 00:08:51.158 "rw_mbytes_per_sec": 0, 00:08:51.158 "r_mbytes_per_sec": 0, 00:08:51.158 "w_mbytes_per_sec": 0 00:08:51.158 }, 00:08:51.158 "claimed": true, 00:08:51.158 "claim_type": "exclusive_write", 00:08:51.158 "zoned": false, 00:08:51.158 "supported_io_types": { 00:08:51.158 "read": true, 00:08:51.158 "write": true, 00:08:51.158 "unmap": true, 00:08:51.158 "flush": true, 00:08:51.158 "reset": true, 00:08:51.158 "nvme_admin": false, 00:08:51.158 "nvme_io": false, 00:08:51.158 "nvme_io_md": false, 00:08:51.158 "write_zeroes": true, 00:08:51.158 "zcopy": true, 00:08:51.158 "get_zone_info": false, 00:08:51.158 "zone_management": false, 00:08:51.158 "zone_append": false, 00:08:51.158 "compare": false, 00:08:51.158 "compare_and_write": false, 00:08:51.158 "abort": true, 00:08:51.158 "seek_hole": false, 00:08:51.158 "seek_data": false, 00:08:51.158 "copy": true, 00:08:51.158 "nvme_iov_md": false 00:08:51.158 }, 00:08:51.158 "memory_domains": [ 00:08:51.158 { 00:08:51.158 "dma_device_id": "system", 00:08:51.158 "dma_device_type": 1 00:08:51.158 }, 00:08:51.158 { 00:08:51.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.158 "dma_device_type": 2 00:08:51.158 } 00:08:51.158 ], 00:08:51.158 "driver_specific": 
{} 00:08:51.158 } 00:08:51.158 ] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.158 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.158 "name": "Existed_Raid", 00:08:51.158 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:51.158 "strip_size_kb": 64, 00:08:51.158 "state": "online", 00:08:51.158 "raid_level": "raid0", 00:08:51.158 "superblock": true, 00:08:51.159 "num_base_bdevs": 3, 00:08:51.159 "num_base_bdevs_discovered": 3, 00:08:51.159 "num_base_bdevs_operational": 3, 00:08:51.159 "base_bdevs_list": [ 00:08:51.159 { 00:08:51.159 "name": "BaseBdev1", 00:08:51.159 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:51.159 "is_configured": true, 00:08:51.159 "data_offset": 2048, 00:08:51.159 "data_size": 63488 00:08:51.159 }, 00:08:51.159 { 00:08:51.159 "name": "BaseBdev2", 00:08:51.159 "uuid": "dfdfd370-5993-4120-816d-b5d9d16ceba2", 00:08:51.159 "is_configured": true, 00:08:51.159 "data_offset": 2048, 00:08:51.159 "data_size": 63488 00:08:51.159 }, 00:08:51.159 { 00:08:51.159 "name": "BaseBdev3", 00:08:51.159 "uuid": "cb5d9e58-b728-481e-b4fb-617b5cfb56fa", 00:08:51.159 "is_configured": true, 00:08:51.159 "data_offset": 2048, 00:08:51.159 "data_size": 63488 00:08:51.159 } 00:08:51.159 ] 00:08:51.159 }' 00:08:51.159 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.159 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.725 [2024-11-19 10:03:05.688108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.725 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.725 "name": "Existed_Raid", 00:08:51.725 "aliases": [ 00:08:51.725 "2058db1f-5ab1-491b-9f9e-9130e1ce02eb" 00:08:51.725 ], 00:08:51.725 "product_name": "Raid Volume", 00:08:51.725 "block_size": 512, 00:08:51.725 "num_blocks": 190464, 00:08:51.725 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:51.725 "assigned_rate_limits": { 00:08:51.725 "rw_ios_per_sec": 0, 00:08:51.725 "rw_mbytes_per_sec": 0, 00:08:51.725 "r_mbytes_per_sec": 0, 00:08:51.725 "w_mbytes_per_sec": 0 00:08:51.725 }, 00:08:51.725 "claimed": false, 00:08:51.725 "zoned": false, 00:08:51.725 "supported_io_types": { 00:08:51.725 "read": true, 00:08:51.725 "write": true, 00:08:51.725 "unmap": true, 00:08:51.725 "flush": true, 00:08:51.726 "reset": true, 00:08:51.726 "nvme_admin": false, 00:08:51.726 "nvme_io": false, 00:08:51.726 "nvme_io_md": false, 00:08:51.726 
"write_zeroes": true, 00:08:51.726 "zcopy": false, 00:08:51.726 "get_zone_info": false, 00:08:51.726 "zone_management": false, 00:08:51.726 "zone_append": false, 00:08:51.726 "compare": false, 00:08:51.726 "compare_and_write": false, 00:08:51.726 "abort": false, 00:08:51.726 "seek_hole": false, 00:08:51.726 "seek_data": false, 00:08:51.726 "copy": false, 00:08:51.726 "nvme_iov_md": false 00:08:51.726 }, 00:08:51.726 "memory_domains": [ 00:08:51.726 { 00:08:51.726 "dma_device_id": "system", 00:08:51.726 "dma_device_type": 1 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.726 "dma_device_type": 2 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "dma_device_id": "system", 00:08:51.726 "dma_device_type": 1 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.726 "dma_device_type": 2 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "dma_device_id": "system", 00:08:51.726 "dma_device_type": 1 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.726 "dma_device_type": 2 00:08:51.726 } 00:08:51.726 ], 00:08:51.726 "driver_specific": { 00:08:51.726 "raid": { 00:08:51.726 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:51.726 "strip_size_kb": 64, 00:08:51.726 "state": "online", 00:08:51.726 "raid_level": "raid0", 00:08:51.726 "superblock": true, 00:08:51.726 "num_base_bdevs": 3, 00:08:51.726 "num_base_bdevs_discovered": 3, 00:08:51.726 "num_base_bdevs_operational": 3, 00:08:51.726 "base_bdevs_list": [ 00:08:51.726 { 00:08:51.726 "name": "BaseBdev1", 00:08:51.726 "uuid": "aae669f7-5cd1-4486-8418-e39b43f1f48a", 00:08:51.726 "is_configured": true, 00:08:51.726 "data_offset": 2048, 00:08:51.726 "data_size": 63488 00:08:51.726 }, 00:08:51.726 { 00:08:51.726 "name": "BaseBdev2", 00:08:51.726 "uuid": "dfdfd370-5993-4120-816d-b5d9d16ceba2", 00:08:51.726 "is_configured": true, 00:08:51.726 "data_offset": 2048, 00:08:51.726 "data_size": 63488 00:08:51.726 }, 
00:08:51.726 { 00:08:51.726 "name": "BaseBdev3", 00:08:51.726 "uuid": "cb5d9e58-b728-481e-b4fb-617b5cfb56fa", 00:08:51.726 "is_configured": true, 00:08:51.726 "data_offset": 2048, 00:08:51.726 "data_size": 63488 00:08:51.726 } 00:08:51.726 ] 00:08:51.726 } 00:08:51.726 } 00:08:51.726 }' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:51.726 BaseBdev2 00:08:51.726 BaseBdev3' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.726 
10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.726 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.984 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.985 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.985 10:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.985 10:03:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.985 10:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.985 [2024-11-19 10:03:05.995864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.985 [2024-11-19 10:03:05.995907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.985 [2024-11-19 10:03:05.995991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.985 "name": "Existed_Raid", 00:08:51.985 "uuid": "2058db1f-5ab1-491b-9f9e-9130e1ce02eb", 00:08:51.985 "strip_size_kb": 64, 00:08:51.985 "state": "offline", 00:08:51.985 "raid_level": "raid0", 00:08:51.985 "superblock": true, 00:08:51.985 "num_base_bdevs": 3, 00:08:51.985 "num_base_bdevs_discovered": 2, 00:08:51.985 "num_base_bdevs_operational": 2, 00:08:51.985 "base_bdevs_list": [ 00:08:51.985 { 00:08:51.985 "name": null, 00:08:51.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.985 "is_configured": false, 00:08:51.985 "data_offset": 0, 00:08:51.985 "data_size": 63488 00:08:51.985 }, 00:08:51.985 { 00:08:51.985 "name": "BaseBdev2", 00:08:51.985 "uuid": "dfdfd370-5993-4120-816d-b5d9d16ceba2", 00:08:51.985 "is_configured": true, 00:08:51.985 "data_offset": 2048, 00:08:51.985 "data_size": 63488 00:08:51.985 }, 00:08:51.985 { 00:08:51.985 "name": "BaseBdev3", 00:08:51.985 "uuid": "cb5d9e58-b728-481e-b4fb-617b5cfb56fa", 
00:08:51.985 "is_configured": true, 00:08:51.985 "data_offset": 2048, 00:08:51.985 "data_size": 63488 00:08:51.985 } 00:08:51.985 ] 00:08:51.985 }' 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.985 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.552 [2024-11-19 10:03:06.669430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.552 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.811 [2024-11-19 10:03:06.826694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.811 [2024-11-19 10:03:06.826776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.811 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.812 10:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.812 BaseBdev2 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:52.812 10:03:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.812 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.812 [ 00:08:52.812 { 00:08:52.812 "name": "BaseBdev2", 00:08:52.812 "aliases": [ 00:08:52.812 "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f" 00:08:52.812 ], 00:08:53.112 "product_name": "Malloc disk", 00:08:53.112 "block_size": 512, 00:08:53.112 "num_blocks": 65536, 00:08:53.112 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:53.112 "assigned_rate_limits": { 00:08:53.112 "rw_ios_per_sec": 0, 00:08:53.112 "rw_mbytes_per_sec": 0, 00:08:53.112 "r_mbytes_per_sec": 0, 00:08:53.112 "w_mbytes_per_sec": 0 00:08:53.112 }, 00:08:53.112 "claimed": false, 00:08:53.112 "zoned": false, 00:08:53.112 "supported_io_types": { 00:08:53.112 "read": true, 00:08:53.112 "write": true, 00:08:53.112 "unmap": true, 00:08:53.112 "flush": true, 00:08:53.112 "reset": true, 00:08:53.112 "nvme_admin": false, 00:08:53.112 "nvme_io": false, 00:08:53.112 "nvme_io_md": false, 00:08:53.112 "write_zeroes": true, 00:08:53.112 "zcopy": true, 00:08:53.112 "get_zone_info": false, 00:08:53.112 
"zone_management": false, 00:08:53.112 "zone_append": false, 00:08:53.112 "compare": false, 00:08:53.112 "compare_and_write": false, 00:08:53.112 "abort": true, 00:08:53.112 "seek_hole": false, 00:08:53.112 "seek_data": false, 00:08:53.112 "copy": true, 00:08:53.112 "nvme_iov_md": false 00:08:53.112 }, 00:08:53.112 "memory_domains": [ 00:08:53.112 { 00:08:53.112 "dma_device_id": "system", 00:08:53.112 "dma_device_type": 1 00:08:53.112 }, 00:08:53.112 { 00:08:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.112 "dma_device_type": 2 00:08:53.112 } 00:08:53.112 ], 00:08:53.112 "driver_specific": {} 00:08:53.112 } 00:08:53.112 ] 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.112 BaseBdev3 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.112 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.112 [ 00:08:53.112 { 00:08:53.112 "name": "BaseBdev3", 00:08:53.112 "aliases": [ 00:08:53.112 "494e08f3-42a8-4319-997a-6f5b0e1eb8c4" 00:08:53.112 ], 00:08:53.112 "product_name": "Malloc disk", 00:08:53.112 "block_size": 512, 00:08:53.112 "num_blocks": 65536, 00:08:53.112 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:53.112 "assigned_rate_limits": { 00:08:53.112 "rw_ios_per_sec": 0, 00:08:53.112 "rw_mbytes_per_sec": 0, 00:08:53.112 "r_mbytes_per_sec": 0, 00:08:53.112 "w_mbytes_per_sec": 0 00:08:53.112 }, 00:08:53.112 "claimed": false, 00:08:53.112 "zoned": false, 00:08:53.112 "supported_io_types": { 00:08:53.112 "read": true, 00:08:53.112 "write": true, 00:08:53.112 "unmap": true, 00:08:53.112 "flush": true, 00:08:53.112 "reset": true, 00:08:53.112 "nvme_admin": false, 00:08:53.112 "nvme_io": false, 00:08:53.112 "nvme_io_md": false, 00:08:53.112 "write_zeroes": true, 00:08:53.112 
"zcopy": true, 00:08:53.112 "get_zone_info": false, 00:08:53.112 "zone_management": false, 00:08:53.112 "zone_append": false, 00:08:53.112 "compare": false, 00:08:53.112 "compare_and_write": false, 00:08:53.112 "abort": true, 00:08:53.112 "seek_hole": false, 00:08:53.112 "seek_data": false, 00:08:53.112 "copy": true, 00:08:53.112 "nvme_iov_md": false 00:08:53.112 }, 00:08:53.112 "memory_domains": [ 00:08:53.112 { 00:08:53.113 "dma_device_id": "system", 00:08:53.113 "dma_device_type": 1 00:08:53.113 }, 00:08:53.113 { 00:08:53.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.113 "dma_device_type": 2 00:08:53.113 } 00:08:53.113 ], 00:08:53.113 "driver_specific": {} 00:08:53.113 } 00:08:53.113 ] 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.113 [2024-11-19 10:03:07.134986] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.113 [2024-11-19 10:03:07.135057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.113 [2024-11-19 10:03:07.135099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.113 [2024-11-19 10:03:07.137827] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.113 10:03:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.113 "name": "Existed_Raid", 00:08:53.113 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:53.113 "strip_size_kb": 64, 00:08:53.113 "state": "configuring", 00:08:53.113 "raid_level": "raid0", 00:08:53.113 "superblock": true, 00:08:53.113 "num_base_bdevs": 3, 00:08:53.113 "num_base_bdevs_discovered": 2, 00:08:53.113 "num_base_bdevs_operational": 3, 00:08:53.113 "base_bdevs_list": [ 00:08:53.113 { 00:08:53.113 "name": "BaseBdev1", 00:08:53.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.113 "is_configured": false, 00:08:53.113 "data_offset": 0, 00:08:53.113 "data_size": 0 00:08:53.113 }, 00:08:53.113 { 00:08:53.113 "name": "BaseBdev2", 00:08:53.113 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:53.113 "is_configured": true, 00:08:53.113 "data_offset": 2048, 00:08:53.113 "data_size": 63488 00:08:53.113 }, 00:08:53.113 { 00:08:53.113 "name": "BaseBdev3", 00:08:53.113 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:53.113 "is_configured": true, 00:08:53.113 "data_offset": 2048, 00:08:53.113 "data_size": 63488 00:08:53.113 } 00:08:53.113 ] 00:08:53.113 }' 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.113 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.687 [2024-11-19 10:03:07.675113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.687 10:03:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.687 "name": "Existed_Raid", 00:08:53.687 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:53.687 "strip_size_kb": 64, 
00:08:53.687 "state": "configuring", 00:08:53.687 "raid_level": "raid0", 00:08:53.687 "superblock": true, 00:08:53.687 "num_base_bdevs": 3, 00:08:53.687 "num_base_bdevs_discovered": 1, 00:08:53.687 "num_base_bdevs_operational": 3, 00:08:53.687 "base_bdevs_list": [ 00:08:53.687 { 00:08:53.687 "name": "BaseBdev1", 00:08:53.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.687 "is_configured": false, 00:08:53.687 "data_offset": 0, 00:08:53.687 "data_size": 0 00:08:53.687 }, 00:08:53.687 { 00:08:53.687 "name": null, 00:08:53.687 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:53.687 "is_configured": false, 00:08:53.687 "data_offset": 0, 00:08:53.687 "data_size": 63488 00:08:53.687 }, 00:08:53.687 { 00:08:53.687 "name": "BaseBdev3", 00:08:53.687 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:53.687 "is_configured": true, 00:08:53.687 "data_offset": 2048, 00:08:53.687 "data_size": 63488 00:08:53.687 } 00:08:53.687 ] 00:08:53.687 }' 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.687 10:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 [2024-11-19 10:03:08.329599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.254 BaseBdev1 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.254 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.255 
[ 00:08:54.255 { 00:08:54.255 "name": "BaseBdev1", 00:08:54.255 "aliases": [ 00:08:54.255 "2c75c2f2-bdcc-40c5-9d4e-b40291723722" 00:08:54.255 ], 00:08:54.255 "product_name": "Malloc disk", 00:08:54.255 "block_size": 512, 00:08:54.255 "num_blocks": 65536, 00:08:54.255 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:54.255 "assigned_rate_limits": { 00:08:54.255 "rw_ios_per_sec": 0, 00:08:54.255 "rw_mbytes_per_sec": 0, 00:08:54.255 "r_mbytes_per_sec": 0, 00:08:54.255 "w_mbytes_per_sec": 0 00:08:54.255 }, 00:08:54.255 "claimed": true, 00:08:54.255 "claim_type": "exclusive_write", 00:08:54.255 "zoned": false, 00:08:54.255 "supported_io_types": { 00:08:54.255 "read": true, 00:08:54.255 "write": true, 00:08:54.255 "unmap": true, 00:08:54.255 "flush": true, 00:08:54.255 "reset": true, 00:08:54.255 "nvme_admin": false, 00:08:54.255 "nvme_io": false, 00:08:54.255 "nvme_io_md": false, 00:08:54.255 "write_zeroes": true, 00:08:54.255 "zcopy": true, 00:08:54.255 "get_zone_info": false, 00:08:54.255 "zone_management": false, 00:08:54.255 "zone_append": false, 00:08:54.255 "compare": false, 00:08:54.255 "compare_and_write": false, 00:08:54.255 "abort": true, 00:08:54.255 "seek_hole": false, 00:08:54.255 "seek_data": false, 00:08:54.255 "copy": true, 00:08:54.255 "nvme_iov_md": false 00:08:54.255 }, 00:08:54.255 "memory_domains": [ 00:08:54.255 { 00:08:54.255 "dma_device_id": "system", 00:08:54.255 "dma_device_type": 1 00:08:54.255 }, 00:08:54.255 { 00:08:54.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.255 "dma_device_type": 2 00:08:54.255 } 00:08:54.255 ], 00:08:54.255 "driver_specific": {} 00:08:54.255 } 00:08:54.255 ] 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.255 "name": "Existed_Raid", 00:08:54.255 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:54.255 "strip_size_kb": 64, 00:08:54.255 "state": "configuring", 00:08:54.255 "raid_level": "raid0", 00:08:54.255 "superblock": true, 
00:08:54.255 "num_base_bdevs": 3, 00:08:54.255 "num_base_bdevs_discovered": 2, 00:08:54.255 "num_base_bdevs_operational": 3, 00:08:54.255 "base_bdevs_list": [ 00:08:54.255 { 00:08:54.255 "name": "BaseBdev1", 00:08:54.255 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:54.255 "is_configured": true, 00:08:54.255 "data_offset": 2048, 00:08:54.255 "data_size": 63488 00:08:54.255 }, 00:08:54.255 { 00:08:54.255 "name": null, 00:08:54.255 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:54.255 "is_configured": false, 00:08:54.255 "data_offset": 0, 00:08:54.255 "data_size": 63488 00:08:54.255 }, 00:08:54.255 { 00:08:54.255 "name": "BaseBdev3", 00:08:54.255 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:54.255 "is_configured": true, 00:08:54.255 "data_offset": 2048, 00:08:54.255 "data_size": 63488 00:08:54.255 } 00:08:54.255 ] 00:08:54.255 }' 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.255 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.822 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.822 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.822 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 [2024-11-19 10:03:08.933852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.823 "name": "Existed_Raid", 00:08:54.823 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:54.823 "strip_size_kb": 64, 00:08:54.823 "state": "configuring", 00:08:54.823 "raid_level": "raid0", 00:08:54.823 "superblock": true, 00:08:54.823 "num_base_bdevs": 3, 00:08:54.823 "num_base_bdevs_discovered": 1, 00:08:54.823 "num_base_bdevs_operational": 3, 00:08:54.823 "base_bdevs_list": [ 00:08:54.823 { 00:08:54.823 "name": "BaseBdev1", 00:08:54.823 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:54.823 "is_configured": true, 00:08:54.823 "data_offset": 2048, 00:08:54.823 "data_size": 63488 00:08:54.823 }, 00:08:54.823 { 00:08:54.823 "name": null, 00:08:54.823 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:54.823 "is_configured": false, 00:08:54.823 "data_offset": 0, 00:08:54.823 "data_size": 63488 00:08:54.823 }, 00:08:54.823 { 00:08:54.823 "name": null, 00:08:54.823 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:54.823 "is_configured": false, 00:08:54.823 "data_offset": 0, 00:08:54.823 "data_size": 63488 00:08:54.823 } 00:08:54.823 ] 00:08:54.823 }' 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.823 10:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 [2024-11-19 10:03:09.534064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.389 "name": "Existed_Raid", 00:08:55.389 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:55.389 "strip_size_kb": 64, 00:08:55.389 "state": "configuring", 00:08:55.389 "raid_level": "raid0", 00:08:55.389 "superblock": true, 00:08:55.389 "num_base_bdevs": 3, 00:08:55.389 "num_base_bdevs_discovered": 2, 00:08:55.389 "num_base_bdevs_operational": 3, 00:08:55.389 "base_bdevs_list": [ 00:08:55.389 { 00:08:55.389 "name": "BaseBdev1", 00:08:55.389 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:55.389 "is_configured": true, 00:08:55.389 "data_offset": 2048, 00:08:55.389 "data_size": 63488 00:08:55.389 }, 00:08:55.389 { 00:08:55.389 "name": null, 00:08:55.389 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:55.389 "is_configured": false, 00:08:55.389 "data_offset": 0, 00:08:55.389 "data_size": 63488 00:08:55.389 }, 00:08:55.389 { 00:08:55.389 "name": "BaseBdev3", 00:08:55.389 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:55.389 "is_configured": true, 00:08:55.389 "data_offset": 2048, 00:08:55.389 "data_size": 63488 00:08:55.389 } 00:08:55.389 ] 00:08:55.389 }' 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.389 10:03:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.956 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.956 [2024-11-19 10:03:10.158255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.215 "name": "Existed_Raid", 00:08:56.215 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:56.215 "strip_size_kb": 64, 00:08:56.215 "state": "configuring", 00:08:56.215 "raid_level": "raid0", 00:08:56.215 "superblock": true, 00:08:56.215 "num_base_bdevs": 3, 00:08:56.215 "num_base_bdevs_discovered": 1, 00:08:56.215 "num_base_bdevs_operational": 3, 00:08:56.215 "base_bdevs_list": [ 00:08:56.215 { 00:08:56.215 "name": null, 00:08:56.215 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:56.215 "is_configured": false, 00:08:56.215 "data_offset": 0, 00:08:56.215 "data_size": 63488 00:08:56.215 }, 00:08:56.215 { 00:08:56.215 "name": null, 00:08:56.215 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:56.215 "is_configured": false, 00:08:56.215 "data_offset": 0, 00:08:56.215 
"data_size": 63488 00:08:56.215 }, 00:08:56.215 { 00:08:56.215 "name": "BaseBdev3", 00:08:56.215 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:56.215 "is_configured": true, 00:08:56.215 "data_offset": 2048, 00:08:56.215 "data_size": 63488 00:08:56.215 } 00:08:56.215 ] 00:08:56.215 }' 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.215 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.782 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.782 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.782 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.782 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.782 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 [2024-11-19 10:03:10.855794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.783 10:03:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.783 "name": "Existed_Raid", 00:08:56.783 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:56.783 "strip_size_kb": 64, 00:08:56.783 "state": "configuring", 00:08:56.783 "raid_level": "raid0", 00:08:56.783 "superblock": true, 00:08:56.783 "num_base_bdevs": 3, 00:08:56.783 
"num_base_bdevs_discovered": 2, 00:08:56.783 "num_base_bdevs_operational": 3, 00:08:56.783 "base_bdevs_list": [ 00:08:56.783 { 00:08:56.783 "name": null, 00:08:56.783 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:56.783 "is_configured": false, 00:08:56.783 "data_offset": 0, 00:08:56.783 "data_size": 63488 00:08:56.783 }, 00:08:56.783 { 00:08:56.783 "name": "BaseBdev2", 00:08:56.783 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:56.783 "is_configured": true, 00:08:56.783 "data_offset": 2048, 00:08:56.783 "data_size": 63488 00:08:56.783 }, 00:08:56.783 { 00:08:56.783 "name": "BaseBdev3", 00:08:56.783 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:56.783 "is_configured": true, 00:08:56.783 "data_offset": 2048, 00:08:56.783 "data_size": 63488 00:08:56.783 } 00:08:56.783 ] 00:08:56.783 }' 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.783 10:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:57.349 10:03:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c75c2f2-bdcc-40c5-9d4e-b40291723722 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.349 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.349 [2024-11-19 10:03:11.518188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:57.350 [2024-11-19 10:03:11.518552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:57.350 [2024-11-19 10:03:11.518578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.350 NewBaseBdev 00:08:57.350 [2024-11-19 10:03:11.518929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:57.350 [2024-11-19 10:03:11.519142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:57.350 [2024-11-19 10:03:11.519160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:57.350 [2024-11-19 10:03:11.519347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:57.350 
10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 [ 00:08:57.350 { 00:08:57.350 "name": "NewBaseBdev", 00:08:57.350 "aliases": [ 00:08:57.350 "2c75c2f2-bdcc-40c5-9d4e-b40291723722" 00:08:57.350 ], 00:08:57.350 "product_name": "Malloc disk", 00:08:57.350 "block_size": 512, 00:08:57.350 "num_blocks": 65536, 00:08:57.350 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:57.350 "assigned_rate_limits": { 00:08:57.350 "rw_ios_per_sec": 0, 00:08:57.350 "rw_mbytes_per_sec": 0, 00:08:57.350 "r_mbytes_per_sec": 0, 00:08:57.350 "w_mbytes_per_sec": 0 00:08:57.350 }, 00:08:57.350 "claimed": true, 00:08:57.350 "claim_type": "exclusive_write", 00:08:57.350 "zoned": false, 00:08:57.350 "supported_io_types": { 00:08:57.350 "read": true, 00:08:57.350 "write": true, 00:08:57.350 
"unmap": true, 00:08:57.350 "flush": true, 00:08:57.350 "reset": true, 00:08:57.350 "nvme_admin": false, 00:08:57.350 "nvme_io": false, 00:08:57.350 "nvme_io_md": false, 00:08:57.350 "write_zeroes": true, 00:08:57.350 "zcopy": true, 00:08:57.350 "get_zone_info": false, 00:08:57.350 "zone_management": false, 00:08:57.350 "zone_append": false, 00:08:57.350 "compare": false, 00:08:57.350 "compare_and_write": false, 00:08:57.350 "abort": true, 00:08:57.350 "seek_hole": false, 00:08:57.350 "seek_data": false, 00:08:57.350 "copy": true, 00:08:57.350 "nvme_iov_md": false 00:08:57.350 }, 00:08:57.350 "memory_domains": [ 00:08:57.350 { 00:08:57.350 "dma_device_id": "system", 00:08:57.350 "dma_device_type": 1 00:08:57.350 }, 00:08:57.350 { 00:08:57.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.350 "dma_device_type": 2 00:08:57.350 } 00:08:57.350 ], 00:08:57.350 "driver_specific": {} 00:08:57.350 } 00:08:57.350 ] 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.350 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.608 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.608 "name": "Existed_Raid", 00:08:57.608 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:57.608 "strip_size_kb": 64, 00:08:57.608 "state": "online", 00:08:57.608 "raid_level": "raid0", 00:08:57.608 "superblock": true, 00:08:57.608 "num_base_bdevs": 3, 00:08:57.608 "num_base_bdevs_discovered": 3, 00:08:57.608 "num_base_bdevs_operational": 3, 00:08:57.608 "base_bdevs_list": [ 00:08:57.608 { 00:08:57.608 "name": "NewBaseBdev", 00:08:57.608 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:57.608 "is_configured": true, 00:08:57.608 "data_offset": 2048, 00:08:57.608 "data_size": 63488 00:08:57.608 }, 00:08:57.608 { 00:08:57.608 "name": "BaseBdev2", 00:08:57.608 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:57.608 "is_configured": true, 00:08:57.608 "data_offset": 2048, 00:08:57.608 "data_size": 63488 00:08:57.608 }, 00:08:57.608 { 00:08:57.608 "name": "BaseBdev3", 00:08:57.608 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:57.608 
"is_configured": true, 00:08:57.608 "data_offset": 2048, 00:08:57.608 "data_size": 63488 00:08:57.608 } 00:08:57.608 ] 00:08:57.608 }' 00:08:57.608 10:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.608 10:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.175 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 [2024-11-19 10:03:12.118814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.176 "name": "Existed_Raid", 00:08:58.176 "aliases": [ 00:08:58.176 "edaa07c5-9efa-4b46-8ca0-3f32aab55841" 00:08:58.176 ], 00:08:58.176 "product_name": "Raid 
Volume", 00:08:58.176 "block_size": 512, 00:08:58.176 "num_blocks": 190464, 00:08:58.176 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:58.176 "assigned_rate_limits": { 00:08:58.176 "rw_ios_per_sec": 0, 00:08:58.176 "rw_mbytes_per_sec": 0, 00:08:58.176 "r_mbytes_per_sec": 0, 00:08:58.176 "w_mbytes_per_sec": 0 00:08:58.176 }, 00:08:58.176 "claimed": false, 00:08:58.176 "zoned": false, 00:08:58.176 "supported_io_types": { 00:08:58.176 "read": true, 00:08:58.176 "write": true, 00:08:58.176 "unmap": true, 00:08:58.176 "flush": true, 00:08:58.176 "reset": true, 00:08:58.176 "nvme_admin": false, 00:08:58.176 "nvme_io": false, 00:08:58.176 "nvme_io_md": false, 00:08:58.176 "write_zeroes": true, 00:08:58.176 "zcopy": false, 00:08:58.176 "get_zone_info": false, 00:08:58.176 "zone_management": false, 00:08:58.176 "zone_append": false, 00:08:58.176 "compare": false, 00:08:58.176 "compare_and_write": false, 00:08:58.176 "abort": false, 00:08:58.176 "seek_hole": false, 00:08:58.176 "seek_data": false, 00:08:58.176 "copy": false, 00:08:58.176 "nvme_iov_md": false 00:08:58.176 }, 00:08:58.176 "memory_domains": [ 00:08:58.176 { 00:08:58.176 "dma_device_id": "system", 00:08:58.176 "dma_device_type": 1 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.176 "dma_device_type": 2 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "dma_device_id": "system", 00:08:58.176 "dma_device_type": 1 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.176 "dma_device_type": 2 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "dma_device_id": "system", 00:08:58.176 "dma_device_type": 1 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.176 "dma_device_type": 2 00:08:58.176 } 00:08:58.176 ], 00:08:58.176 "driver_specific": { 00:08:58.176 "raid": { 00:08:58.176 "uuid": "edaa07c5-9efa-4b46-8ca0-3f32aab55841", 00:08:58.176 "strip_size_kb": 64, 00:08:58.176 "state": "online", 
00:08:58.176 "raid_level": "raid0", 00:08:58.176 "superblock": true, 00:08:58.176 "num_base_bdevs": 3, 00:08:58.176 "num_base_bdevs_discovered": 3, 00:08:58.176 "num_base_bdevs_operational": 3, 00:08:58.176 "base_bdevs_list": [ 00:08:58.176 { 00:08:58.176 "name": "NewBaseBdev", 00:08:58.176 "uuid": "2c75c2f2-bdcc-40c5-9d4e-b40291723722", 00:08:58.176 "is_configured": true, 00:08:58.176 "data_offset": 2048, 00:08:58.176 "data_size": 63488 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "name": "BaseBdev2", 00:08:58.176 "uuid": "e7f3d3e5-ddad-4f29-a607-85f4504b2a1f", 00:08:58.176 "is_configured": true, 00:08:58.176 "data_offset": 2048, 00:08:58.176 "data_size": 63488 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "name": "BaseBdev3", 00:08:58.176 "uuid": "494e08f3-42a8-4319-997a-6f5b0e1eb8c4", 00:08:58.176 "is_configured": true, 00:08:58.176 "data_offset": 2048, 00:08:58.176 "data_size": 63488 00:08:58.176 } 00:08:58.176 ] 00:08:58.176 } 00:08:58.176 } 00:08:58.176 }' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:58.176 BaseBdev2 00:08:58.176 BaseBdev3' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.176 10:03:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.176 10:03:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.435 [2024-11-19 10:03:12.434485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.435 [2024-11-19 10:03:12.434526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.435 [2024-11-19 10:03:12.434657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.435 [2024-11-19 10:03:12.434741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.435 [2024-11-19 10:03:12.434764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64297 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64297 ']' 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64297 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64297 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64297' 00:08:58.435 killing process with pid 64297 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64297 00:08:58.435 [2024-11-19 10:03:12.472824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.435 10:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64297 00:08:58.693 [2024-11-19 10:03:12.765414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.069 10:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.069 00:09:00.069 real 0m12.114s 00:09:00.069 user 0m19.975s 00:09:00.069 sys 0m1.725s 00:09:00.069 10:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.069 10:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.069 ************************************ 00:09:00.069 END TEST raid_state_function_test_sb 00:09:00.069 ************************************ 00:09:00.069 10:03:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:00.069 10:03:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:00.069 
10:03:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.069 10:03:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.069 ************************************ 00:09:00.069 START TEST raid_superblock_test 00:09:00.069 ************************************ 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64928 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64928 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64928 ']' 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.069 10:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.069 [2024-11-19 10:03:14.076504] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:00.069 [2024-11-19 10:03:14.077091] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64928 ] 00:09:00.069 [2024-11-19 10:03:14.263069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.328 [2024-11-19 10:03:14.407680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.586 [2024-11-19 10:03:14.629810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.586 [2024-11-19 10:03:14.630138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.844 10:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:00.845 
10:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.845 10:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.845 malloc1 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.845 [2024-11-19 10:03:15.052332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.845 [2024-11-19 10:03:15.052578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.845 [2024-11-19 10:03:15.052765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:00.845 [2024-11-19 10:03:15.052922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.845 [2024-11-19 10:03:15.056111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.845 [2024-11-19 10:03:15.056158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.845 pt1 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.845 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 malloc2 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 [2024-11-19 10:03:15.119261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.105 [2024-11-19 10:03:15.119490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.105 [2024-11-19 10:03:15.119538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:01.105 [2024-11-19 10:03:15.119555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.105 [2024-11-19 10:03:15.122652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.105 [2024-11-19 10:03:15.122823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.105 
pt2 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 malloc3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 [2024-11-19 10:03:15.190767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:01.105 [2024-11-19 10:03:15.190872] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.105 [2024-11-19 10:03:15.190913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:01.105 [2024-11-19 10:03:15.190929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.105 [2024-11-19 10:03:15.194649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.105 [2024-11-19 10:03:15.194705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:01.105 pt3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 [2024-11-19 10:03:15.199003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:01.105 [2024-11-19 10:03:15.201809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.105 [2024-11-19 10:03:15.202038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:01.105 [2024-11-19 10:03:15.202297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:01.105 [2024-11-19 10:03:15.202320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.105 [2024-11-19 10:03:15.202694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:01.105 [2024-11-19 10:03:15.202954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:01.105 [2024-11-19 10:03:15.202972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:01.105 [2024-11-19 10:03:15.203241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.105 10:03:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.105 "name": "raid_bdev1", 00:09:01.105 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:01.105 "strip_size_kb": 64, 00:09:01.105 "state": "online", 00:09:01.105 "raid_level": "raid0", 00:09:01.105 "superblock": true, 00:09:01.105 "num_base_bdevs": 3, 00:09:01.105 "num_base_bdevs_discovered": 3, 00:09:01.105 "num_base_bdevs_operational": 3, 00:09:01.105 "base_bdevs_list": [ 00:09:01.105 { 00:09:01.105 "name": "pt1", 00:09:01.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.105 "is_configured": true, 00:09:01.105 "data_offset": 2048, 00:09:01.105 "data_size": 63488 00:09:01.105 }, 00:09:01.105 { 00:09:01.105 "name": "pt2", 00:09:01.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.105 "is_configured": true, 00:09:01.105 "data_offset": 2048, 00:09:01.105 "data_size": 63488 00:09:01.105 }, 00:09:01.105 { 00:09:01.105 "name": "pt3", 00:09:01.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.105 "is_configured": true, 00:09:01.105 "data_offset": 2048, 00:09:01.105 "data_size": 63488 00:09:01.105 } 00:09:01.105 ] 00:09:01.105 }' 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.105 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 [2024-11-19 10:03:15.735811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.674 "name": "raid_bdev1", 00:09:01.674 "aliases": [ 00:09:01.674 "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7" 00:09:01.674 ], 00:09:01.674 "product_name": "Raid Volume", 00:09:01.674 "block_size": 512, 00:09:01.674 "num_blocks": 190464, 00:09:01.674 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:01.674 "assigned_rate_limits": { 00:09:01.674 "rw_ios_per_sec": 0, 00:09:01.674 "rw_mbytes_per_sec": 0, 00:09:01.674 "r_mbytes_per_sec": 0, 00:09:01.674 "w_mbytes_per_sec": 0 00:09:01.674 }, 00:09:01.674 "claimed": false, 00:09:01.674 "zoned": false, 00:09:01.674 "supported_io_types": { 00:09:01.674 "read": true, 00:09:01.674 "write": true, 00:09:01.674 "unmap": true, 00:09:01.674 "flush": true, 00:09:01.674 "reset": true, 00:09:01.674 "nvme_admin": false, 00:09:01.674 "nvme_io": false, 00:09:01.674 "nvme_io_md": false, 00:09:01.674 "write_zeroes": true, 00:09:01.674 "zcopy": false, 00:09:01.674 "get_zone_info": false, 00:09:01.674 "zone_management": false, 00:09:01.674 "zone_append": false, 00:09:01.674 "compare": 
false, 00:09:01.674 "compare_and_write": false, 00:09:01.674 "abort": false, 00:09:01.674 "seek_hole": false, 00:09:01.674 "seek_data": false, 00:09:01.674 "copy": false, 00:09:01.674 "nvme_iov_md": false 00:09:01.674 }, 00:09:01.674 "memory_domains": [ 00:09:01.674 { 00:09:01.674 "dma_device_id": "system", 00:09:01.674 "dma_device_type": 1 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.674 "dma_device_type": 2 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "dma_device_id": "system", 00:09:01.674 "dma_device_type": 1 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.674 "dma_device_type": 2 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "dma_device_id": "system", 00:09:01.674 "dma_device_type": 1 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.674 "dma_device_type": 2 00:09:01.674 } 00:09:01.674 ], 00:09:01.674 "driver_specific": { 00:09:01.674 "raid": { 00:09:01.674 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:01.674 "strip_size_kb": 64, 00:09:01.674 "state": "online", 00:09:01.674 "raid_level": "raid0", 00:09:01.674 "superblock": true, 00:09:01.674 "num_base_bdevs": 3, 00:09:01.674 "num_base_bdevs_discovered": 3, 00:09:01.674 "num_base_bdevs_operational": 3, 00:09:01.674 "base_bdevs_list": [ 00:09:01.674 { 00:09:01.674 "name": "pt1", 00:09:01.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.674 "is_configured": true, 00:09:01.674 "data_offset": 2048, 00:09:01.674 "data_size": 63488 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "name": "pt2", 00:09:01.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.674 "is_configured": true, 00:09:01.674 "data_offset": 2048, 00:09:01.674 "data_size": 63488 00:09:01.674 }, 00:09:01.674 { 00:09:01.674 "name": "pt3", 00:09:01.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.674 "is_configured": true, 00:09:01.674 "data_offset": 2048, 00:09:01.674 "data_size": 
63488 00:09:01.674 } 00:09:01.674 ] 00:09:01.674 } 00:09:01.674 } 00:09:01.674 }' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.674 pt2 00:09:01.674 pt3' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.674 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.675 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.675 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.675 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.675 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 
10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.933 10:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 [2024-11-19 10:03:16.071836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c23d5660-d2cb-4a42-ac98-c548b4dfb1f7 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c23d5660-d2cb-4a42-ac98-c548b4dfb1f7 ']' 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 [2024-11-19 10:03:16.123455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.933 [2024-11-19 10:03:16.123668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.933 [2024-11-19 10:03:16.123838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.933 [2024-11-19 10:03:16.123937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.933 [2024-11-19 10:03:16.123956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.933 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.934 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.934 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:01.934 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:02.193 10:03:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:02.193 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.194 [2024-11-19 10:03:16.279584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:02.194 [2024-11-19 10:03:16.282417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:02.194 [2024-11-19 10:03:16.282624] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:02.194 [2024-11-19 10:03:16.282719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:02.194 [2024-11-19 10:03:16.282828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:02.194 [2024-11-19 10:03:16.282868] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:02.194 [2024-11-19 10:03:16.282898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.194 [2024-11-19 10:03:16.282916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:02.194 request: 00:09:02.194 { 00:09:02.194 "name": "raid_bdev1", 00:09:02.194 "raid_level": "raid0", 00:09:02.194 "base_bdevs": [ 00:09:02.194 "malloc1", 00:09:02.194 "malloc2", 00:09:02.194 "malloc3" 00:09:02.194 ], 00:09:02.194 "strip_size_kb": 64, 00:09:02.194 "superblock": false, 00:09:02.194 "method": "bdev_raid_create", 00:09:02.194 "req_id": 1 00:09:02.194 } 00:09:02.194 Got JSON-RPC error response 00:09:02.194 response: 00:09:02.194 { 00:09:02.194 "code": -17, 00:09:02.194 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:02.194 } 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.194 [2024-11-19 10:03:16.347522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:02.194 [2024-11-19 10:03:16.347763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.194 [2024-11-19 10:03:16.347940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:02.194 [2024-11-19 10:03:16.348056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.194 [2024-11-19 10:03:16.351265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.194 [2024-11-19 10:03:16.351425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:02.194 [2024-11-19 10:03:16.351666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:02.194 [2024-11-19 10:03:16.351879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:02.194 pt1 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.194 "name": "raid_bdev1", 00:09:02.194 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:02.194 
"strip_size_kb": 64, 00:09:02.194 "state": "configuring", 00:09:02.194 "raid_level": "raid0", 00:09:02.194 "superblock": true, 00:09:02.194 "num_base_bdevs": 3, 00:09:02.194 "num_base_bdevs_discovered": 1, 00:09:02.194 "num_base_bdevs_operational": 3, 00:09:02.194 "base_bdevs_list": [ 00:09:02.194 { 00:09:02.194 "name": "pt1", 00:09:02.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.194 "is_configured": true, 00:09:02.194 "data_offset": 2048, 00:09:02.194 "data_size": 63488 00:09:02.194 }, 00:09:02.194 { 00:09:02.194 "name": null, 00:09:02.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.194 "is_configured": false, 00:09:02.194 "data_offset": 2048, 00:09:02.194 "data_size": 63488 00:09:02.194 }, 00:09:02.194 { 00:09:02.194 "name": null, 00:09:02.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.194 "is_configured": false, 00:09:02.194 "data_offset": 2048, 00:09:02.194 "data_size": 63488 00:09:02.194 } 00:09:02.194 ] 00:09:02.194 }' 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.194 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 [2024-11-19 10:03:16.920006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.761 [2024-11-19 10:03:16.920286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.761 [2024-11-19 10:03:16.920340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:02.761 [2024-11-19 10:03:16.920358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.761 [2024-11-19 10:03:16.921001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.761 [2024-11-19 10:03:16.921045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.761 [2024-11-19 10:03:16.921173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:02.761 [2024-11-19 10:03:16.921209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.761 pt2 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 [2024-11-19 10:03:16.927950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.761 10:03:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.761 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.761 "name": "raid_bdev1", 00:09:02.761 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:02.761 "strip_size_kb": 64, 00:09:02.762 "state": "configuring", 00:09:02.762 "raid_level": "raid0", 00:09:02.762 "superblock": true, 00:09:02.762 "num_base_bdevs": 3, 00:09:02.762 "num_base_bdevs_discovered": 1, 00:09:02.762 "num_base_bdevs_operational": 3, 00:09:02.762 "base_bdevs_list": [ 00:09:02.762 { 00:09:02.762 "name": "pt1", 00:09:02.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.762 "is_configured": true, 00:09:02.762 "data_offset": 2048, 00:09:02.762 "data_size": 63488 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "name": null, 00:09:02.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.762 "is_configured": false, 00:09:02.762 "data_offset": 0, 00:09:02.762 "data_size": 63488 00:09:02.762 }, 00:09:02.762 { 00:09:02.762 "name": null, 00:09:02.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.762 
"is_configured": false, 00:09:02.762 "data_offset": 2048, 00:09:02.762 "data_size": 63488 00:09:02.762 } 00:09:02.762 ] 00:09:02.762 }' 00:09:02.762 10:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.762 10:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.329 [2024-11-19 10:03:17.448573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.329 [2024-11-19 10:03:17.448858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.329 [2024-11-19 10:03:17.448901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:03.329 [2024-11-19 10:03:17.448922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.329 [2024-11-19 10:03:17.449592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.329 [2024-11-19 10:03:17.449625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.329 [2024-11-19 10:03:17.449742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:03.329 [2024-11-19 10:03:17.449799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.329 pt2 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.329 [2024-11-19 10:03:17.456532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.329 [2024-11-19 10:03:17.456737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.329 [2024-11-19 10:03:17.456905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:03.329 [2024-11-19 10:03:17.457034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.329 [2024-11-19 10:03:17.457715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.329 [2024-11-19 10:03:17.457904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.329 [2024-11-19 10:03:17.458131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:03.329 [2024-11-19 10:03:17.458288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:03.329 [2024-11-19 10:03:17.458568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.329 [2024-11-19 10:03:17.458706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.329 [2024-11-19 10:03:17.459116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:03.329 pt3 00:09:03.329 [2024-11-19 10:03:17.459445] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.329 [2024-11-19 10:03:17.459470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:03.329 [2024-11-19 10:03:17.459656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.329 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.330 "name": "raid_bdev1", 00:09:03.330 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:03.330 "strip_size_kb": 64, 00:09:03.330 "state": "online", 00:09:03.330 "raid_level": "raid0", 00:09:03.330 "superblock": true, 00:09:03.330 "num_base_bdevs": 3, 00:09:03.330 "num_base_bdevs_discovered": 3, 00:09:03.330 "num_base_bdevs_operational": 3, 00:09:03.330 "base_bdevs_list": [ 00:09:03.330 { 00:09:03.330 "name": "pt1", 00:09:03.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.330 "is_configured": true, 00:09:03.330 "data_offset": 2048, 00:09:03.330 "data_size": 63488 00:09:03.330 }, 00:09:03.330 { 00:09:03.330 "name": "pt2", 00:09:03.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.330 "is_configured": true, 00:09:03.330 "data_offset": 2048, 00:09:03.330 "data_size": 63488 00:09:03.330 }, 00:09:03.330 { 00:09:03.330 "name": "pt3", 00:09:03.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.330 "is_configured": true, 00:09:03.330 "data_offset": 2048, 00:09:03.330 "data_size": 63488 00:09:03.330 } 00:09:03.330 ] 00:09:03.330 }' 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.330 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.897 10:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.897 [2024-11-19 10:03:17.985133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.897 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.897 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.897 "name": "raid_bdev1", 00:09:03.897 "aliases": [ 00:09:03.897 "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7" 00:09:03.897 ], 00:09:03.897 "product_name": "Raid Volume", 00:09:03.897 "block_size": 512, 00:09:03.897 "num_blocks": 190464, 00:09:03.897 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:03.897 "assigned_rate_limits": { 00:09:03.897 "rw_ios_per_sec": 0, 00:09:03.897 "rw_mbytes_per_sec": 0, 00:09:03.897 "r_mbytes_per_sec": 0, 00:09:03.897 "w_mbytes_per_sec": 0 00:09:03.897 }, 00:09:03.897 "claimed": false, 00:09:03.897 "zoned": false, 00:09:03.897 "supported_io_types": { 00:09:03.897 "read": true, 00:09:03.897 "write": true, 00:09:03.897 "unmap": true, 00:09:03.897 "flush": true, 00:09:03.897 "reset": true, 00:09:03.897 "nvme_admin": false, 00:09:03.897 "nvme_io": false, 00:09:03.897 "nvme_io_md": false, 00:09:03.897 "write_zeroes": true, 00:09:03.897 "zcopy": 
false, 00:09:03.897 "get_zone_info": false, 00:09:03.897 "zone_management": false, 00:09:03.897 "zone_append": false, 00:09:03.897 "compare": false, 00:09:03.897 "compare_and_write": false, 00:09:03.897 "abort": false, 00:09:03.897 "seek_hole": false, 00:09:03.897 "seek_data": false, 00:09:03.897 "copy": false, 00:09:03.897 "nvme_iov_md": false 00:09:03.897 }, 00:09:03.897 "memory_domains": [ 00:09:03.897 { 00:09:03.897 "dma_device_id": "system", 00:09:03.897 "dma_device_type": 1 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.897 "dma_device_type": 2 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "dma_device_id": "system", 00:09:03.897 "dma_device_type": 1 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.897 "dma_device_type": 2 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "dma_device_id": "system", 00:09:03.897 "dma_device_type": 1 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.897 "dma_device_type": 2 00:09:03.897 } 00:09:03.897 ], 00:09:03.897 "driver_specific": { 00:09:03.897 "raid": { 00:09:03.897 "uuid": "c23d5660-d2cb-4a42-ac98-c548b4dfb1f7", 00:09:03.897 "strip_size_kb": 64, 00:09:03.897 "state": "online", 00:09:03.897 "raid_level": "raid0", 00:09:03.897 "superblock": true, 00:09:03.897 "num_base_bdevs": 3, 00:09:03.897 "num_base_bdevs_discovered": 3, 00:09:03.897 "num_base_bdevs_operational": 3, 00:09:03.897 "base_bdevs_list": [ 00:09:03.897 { 00:09:03.897 "name": "pt1", 00:09:03.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.897 "is_configured": true, 00:09:03.897 "data_offset": 2048, 00:09:03.897 "data_size": 63488 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "name": "pt2", 00:09:03.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.897 "is_configured": true, 00:09:03.897 "data_offset": 2048, 00:09:03.897 "data_size": 63488 00:09:03.897 }, 00:09:03.897 { 00:09:03.897 "name": "pt3", 00:09:03.897 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.897 "is_configured": true, 00:09:03.897 "data_offset": 2048, 00:09:03.897 "data_size": 63488 00:09:03.897 } 00:09:03.897 ] 00:09:03.897 } 00:09:03.897 } 00:09:03.897 }' 00:09:03.897 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.897 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:03.897 pt2 00:09:03.897 pt3' 00:09:03.897 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.156 10:03:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.156 [2024-11-19 10:03:18.305181] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c23d5660-d2cb-4a42-ac98-c548b4dfb1f7 '!=' c23d5660-d2cb-4a42-ac98-c548b4dfb1f7 ']' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64928 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64928 ']' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64928 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64928 00:09:04.156 killing process with pid 64928 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64928' 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64928 00:09:04.156 [2024-11-19 10:03:18.382506] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.156 10:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # 
wait 64928 00:09:04.156 [2024-11-19 10:03:18.382660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.156 [2024-11-19 10:03:18.382753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.156 [2024-11-19 10:03:18.382774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:04.723 [2024-11-19 10:03:18.678053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.658 10:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:05.658 00:09:05.658 real 0m5.909s 00:09:05.658 user 0m8.730s 00:09:05.658 sys 0m0.936s 00:09:05.658 10:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.658 ************************************ 00:09:05.658 END TEST raid_superblock_test 00:09:05.658 ************************************ 00:09:05.658 10:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.917 10:03:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:05.917 10:03:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.917 10:03:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.917 10:03:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.917 ************************************ 00:09:05.917 START TEST raid_read_error_test 00:09:05.917 ************************************ 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:05.917 
10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9oCqT77Lfz 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65192 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65192 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65192 ']' 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.917 10:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.917 [2024-11-19 10:03:20.034279] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:05.917 [2024-11-19 10:03:20.034478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65192 ] 00:09:06.176 [2024-11-19 10:03:20.228802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.176 [2024-11-19 10:03:20.401411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.436 [2024-11-19 10:03:20.623953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.436 [2024-11-19 10:03:20.624053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 BaseBdev1_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 true 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 [2024-11-19 10:03:21.090034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.005 [2024-11-19 10:03:21.090259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.005 [2024-11-19 10:03:21.090415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:07.005 [2024-11-19 10:03:21.090540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.005 [2024-11-19 10:03:21.093795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.005 BaseBdev1 00:09:07.005 [2024-11-19 10:03:21.093964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 BaseBdev2_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 true 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 [2024-11-19 10:03:21.153938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.005 [2024-11-19 10:03:21.154014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.005 [2024-11-19 10:03:21.154047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.005 [2024-11-19 10:03:21.154066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.005 [2024-11-19 10:03:21.157191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.005 BaseBdev2 00:09:07.005 [2024-11-19 10:03:21.157366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.005 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 BaseBdev3_malloc 00:09:07.006 10:03:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.006 true 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.006 [2024-11-19 10:03:21.226045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:07.006 [2024-11-19 10:03:21.226254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.006 [2024-11-19 10:03:21.226297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:07.006 [2024-11-19 10:03:21.226316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.006 [2024-11-19 10:03:21.229506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.006 BaseBdev3 00:09:07.006 [2024-11-19 10:03:21.229700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.006 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.006 [2024-11-19 10:03:21.234209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.265 [2024-11-19 10:03:21.236939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.265 [2024-11-19 10:03:21.237062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.265 [2024-11-19 10:03:21.237363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.265 [2024-11-19 10:03:21.237386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.265 [2024-11-19 10:03:21.237773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:07.265 [2024-11-19 10:03:21.238041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.265 [2024-11-19 10:03:21.238064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:07.265 [2024-11-19 10:03:21.238342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.265 10:03:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.265 "name": "raid_bdev1", 00:09:07.265 "uuid": "a128976a-dabd-42a7-a52a-4b89b444d403", 00:09:07.265 "strip_size_kb": 64, 00:09:07.265 "state": "online", 00:09:07.265 "raid_level": "raid0", 00:09:07.265 "superblock": true, 00:09:07.265 "num_base_bdevs": 3, 00:09:07.265 "num_base_bdevs_discovered": 3, 00:09:07.265 "num_base_bdevs_operational": 3, 00:09:07.265 "base_bdevs_list": [ 00:09:07.265 { 00:09:07.265 "name": "BaseBdev1", 00:09:07.265 "uuid": "4007c737-c786-59fd-a89f-a9461cddf09a", 00:09:07.265 "is_configured": true, 00:09:07.265 "data_offset": 2048, 00:09:07.265 "data_size": 63488 00:09:07.265 }, 00:09:07.265 { 00:09:07.265 "name": "BaseBdev2", 00:09:07.265 "uuid": "bc4e3c3b-a0b5-5146-8191-7ba5558d9843", 00:09:07.265 "is_configured": true, 00:09:07.265 "data_offset": 2048, 00:09:07.265 "data_size": 63488 
00:09:07.265 }, 00:09:07.265 { 00:09:07.265 "name": "BaseBdev3", 00:09:07.265 "uuid": "9a937e13-ebe4-5609-ab71-480b318738ae", 00:09:07.265 "is_configured": true, 00:09:07.265 "data_offset": 2048, 00:09:07.265 "data_size": 63488 00:09:07.265 } 00:09:07.265 ] 00:09:07.265 }' 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.265 10:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.834 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:07.834 10:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.834 [2024-11-19 10:03:21.907989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.793 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.793 "name": "raid_bdev1", 00:09:08.793 "uuid": "a128976a-dabd-42a7-a52a-4b89b444d403", 00:09:08.793 "strip_size_kb": 64, 00:09:08.793 "state": "online", 00:09:08.793 "raid_level": "raid0", 00:09:08.793 "superblock": true, 00:09:08.793 "num_base_bdevs": 3, 00:09:08.793 "num_base_bdevs_discovered": 3, 00:09:08.793 "num_base_bdevs_operational": 3, 00:09:08.793 "base_bdevs_list": [ 00:09:08.793 { 00:09:08.793 "name": "BaseBdev1", 00:09:08.793 "uuid": "4007c737-c786-59fd-a89f-a9461cddf09a", 00:09:08.793 "is_configured": true, 00:09:08.793 "data_offset": 2048, 00:09:08.793 "data_size": 63488 
00:09:08.793 }, 00:09:08.793 { 00:09:08.793 "name": "BaseBdev2", 00:09:08.793 "uuid": "bc4e3c3b-a0b5-5146-8191-7ba5558d9843", 00:09:08.793 "is_configured": true, 00:09:08.793 "data_offset": 2048, 00:09:08.793 "data_size": 63488 00:09:08.793 }, 00:09:08.793 { 00:09:08.793 "name": "BaseBdev3", 00:09:08.793 "uuid": "9a937e13-ebe4-5609-ab71-480b318738ae", 00:09:08.793 "is_configured": true, 00:09:08.793 "data_offset": 2048, 00:09:08.793 "data_size": 63488 00:09:08.793 } 00:09:08.794 ] 00:09:08.794 }' 00:09:08.794 10:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.794 10:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.075 10:03:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.075 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.075 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.333 [2024-11-19 10:03:23.312357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.333 [2024-11-19 10:03:23.312403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.333 { 00:09:09.333 "results": [ 00:09:09.333 { 00:09:09.333 "job": "raid_bdev1", 00:09:09.333 "core_mask": "0x1", 00:09:09.333 "workload": "randrw", 00:09:09.333 "percentage": 50, 00:09:09.333 "status": "finished", 00:09:09.333 "queue_depth": 1, 00:09:09.333 "io_size": 131072, 00:09:09.333 "runtime": 1.401539, 00:09:09.333 "iops": 9786.38482411121, 00:09:09.333 "mibps": 1223.2981030139013, 00:09:09.333 "io_failed": 1, 00:09:09.333 "io_timeout": 0, 00:09:09.333 "avg_latency_us": 144.39313128367587, 00:09:09.333 "min_latency_us": 43.75272727272727, 00:09:09.333 "max_latency_us": 1861.8181818181818 00:09:09.333 } 00:09:09.333 ], 00:09:09.333 "core_count": 1 00:09:09.333 } 00:09:09.333 [2024-11-19 
10:03:23.315860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.333 [2024-11-19 10:03:23.315936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.333 [2024-11-19 10:03:23.315997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.333 [2024-11-19 10:03:23.316014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65192 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65192 ']' 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65192 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65192 00:09:09.333 killing process with pid 65192 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65192' 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65192 00:09:09.333 10:03:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65192 00:09:09.333 [2024-11-19 10:03:23.355017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.591 [2024-11-19 
10:03:23.585370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9oCqT77Lfz 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:10.968 00:09:10.968 real 0m4.876s 00:09:10.968 user 0m6.007s 00:09:10.968 sys 0m0.642s 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.968 ************************************ 00:09:10.968 END TEST raid_read_error_test 00:09:10.968 ************************************ 00:09:10.968 10:03:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.968 10:03:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:10.968 10:03:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.968 10:03:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.968 10:03:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.968 ************************************ 00:09:10.968 START TEST raid_write_error_test 00:09:10.968 ************************************ 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:10.968 10:03:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:10.968 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:10.969 10:03:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JY1lYulnGu 00:09:10.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65338 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65338 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65338 ']' 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.969 10:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.969 [2024-11-19 10:03:24.966615] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:10.969 [2024-11-19 10:03:24.967051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65338 ] 00:09:10.969 [2024-11-19 10:03:25.145356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.228 [2024-11-19 10:03:25.331833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.486 [2024-11-19 10:03:25.559521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.486 [2024-11-19 10:03:25.559903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.745 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 BaseBdev1_malloc 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 true 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 [2024-11-19 10:03:25.993892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:12.004 [2024-11-19 10:03:25.993980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.004 [2024-11-19 10:03:25.994017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:12.004 [2024-11-19 10:03:25.994037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.004 [2024-11-19 10:03:25.997245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.004 [2024-11-19 10:03:25.997303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:12.004 BaseBdev1 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.004 BaseBdev2_malloc 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 true 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 [2024-11-19 10:03:26.058167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:12.004 [2024-11-19 10:03:26.058403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.004 [2024-11-19 10:03:26.058447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:12.004 [2024-11-19 10:03:26.058468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.004 [2024-11-19 10:03:26.061766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.004 [2024-11-19 10:03:26.061835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:12.004 BaseBdev2 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.004 10:03:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 BaseBdev3_malloc 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.004 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 true 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.005 [2024-11-19 10:03:26.134046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:12.005 [2024-11-19 10:03:26.134331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.005 [2024-11-19 10:03:26.134415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:12.005 [2024-11-19 10:03:26.134442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.005 [2024-11-19 10:03:26.137855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.005 [2024-11-19 10:03:26.137914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:12.005 BaseBdev3 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.005 [2024-11-19 10:03:26.142260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.005 [2024-11-19 10:03:26.145240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.005 [2024-11-19 10:03:26.145502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.005 [2024-11-19 10:03:26.146002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.005 [2024-11-19 10:03:26.146136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.005 [2024-11-19 10:03:26.146553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:12.005 [2024-11-19 10:03:26.146825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.005 [2024-11-19 10:03:26.146852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:12.005 [2024-11-19 10:03:26.147188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.005 "name": "raid_bdev1", 00:09:12.005 "uuid": "d22d9eba-b243-4053-8f06-e695edb5d943", 00:09:12.005 "strip_size_kb": 64, 00:09:12.005 "state": "online", 00:09:12.005 "raid_level": "raid0", 00:09:12.005 "superblock": true, 00:09:12.005 "num_base_bdevs": 3, 00:09:12.005 "num_base_bdevs_discovered": 3, 00:09:12.005 "num_base_bdevs_operational": 3, 00:09:12.005 "base_bdevs_list": [ 00:09:12.005 { 00:09:12.005 "name": "BaseBdev1", 
00:09:12.005 "uuid": "20fd7ecd-feeb-5ab5-b08f-b9feb0bc8b69", 00:09:12.005 "is_configured": true, 00:09:12.005 "data_offset": 2048, 00:09:12.005 "data_size": 63488 00:09:12.005 }, 00:09:12.005 { 00:09:12.005 "name": "BaseBdev2", 00:09:12.005 "uuid": "5cc87ac9-1bc4-5d03-b6e6-d0dabc0d24ef", 00:09:12.005 "is_configured": true, 00:09:12.005 "data_offset": 2048, 00:09:12.005 "data_size": 63488 00:09:12.005 }, 00:09:12.005 { 00:09:12.005 "name": "BaseBdev3", 00:09:12.005 "uuid": "ed86ce40-d329-59ea-9b49-b724465eba1d", 00:09:12.005 "is_configured": true, 00:09:12.005 "data_offset": 2048, 00:09:12.005 "data_size": 63488 00:09:12.005 } 00:09:12.005 ] 00:09:12.005 }' 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.005 10:03:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.573 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:12.573 10:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:12.831 [2024-11-19 10:03:26.816845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:13.765 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.766 "name": "raid_bdev1", 00:09:13.766 "uuid": "d22d9eba-b243-4053-8f06-e695edb5d943", 00:09:13.766 "strip_size_kb": 64, 00:09:13.766 "state": "online", 00:09:13.766 
"raid_level": "raid0", 00:09:13.766 "superblock": true, 00:09:13.766 "num_base_bdevs": 3, 00:09:13.766 "num_base_bdevs_discovered": 3, 00:09:13.766 "num_base_bdevs_operational": 3, 00:09:13.766 "base_bdevs_list": [ 00:09:13.766 { 00:09:13.766 "name": "BaseBdev1", 00:09:13.766 "uuid": "20fd7ecd-feeb-5ab5-b08f-b9feb0bc8b69", 00:09:13.766 "is_configured": true, 00:09:13.766 "data_offset": 2048, 00:09:13.766 "data_size": 63488 00:09:13.766 }, 00:09:13.766 { 00:09:13.766 "name": "BaseBdev2", 00:09:13.766 "uuid": "5cc87ac9-1bc4-5d03-b6e6-d0dabc0d24ef", 00:09:13.766 "is_configured": true, 00:09:13.766 "data_offset": 2048, 00:09:13.766 "data_size": 63488 00:09:13.766 }, 00:09:13.766 { 00:09:13.766 "name": "BaseBdev3", 00:09:13.766 "uuid": "ed86ce40-d329-59ea-9b49-b724465eba1d", 00:09:13.766 "is_configured": true, 00:09:13.766 "data_offset": 2048, 00:09:13.766 "data_size": 63488 00:09:13.766 } 00:09:13.766 ] 00:09:13.766 }' 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.766 10:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.060 [2024-11-19 10:03:28.168340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.060 [2024-11-19 10:03:28.168526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.060 { 00:09:14.060 "results": [ 00:09:14.060 { 00:09:14.060 "job": "raid_bdev1", 00:09:14.060 "core_mask": "0x1", 00:09:14.060 "workload": "randrw", 00:09:14.060 "percentage": 50, 00:09:14.060 "status": "finished", 00:09:14.060 "queue_depth": 1, 00:09:14.060 "io_size": 131072, 
00:09:14.060 "runtime": 1.349, 00:09:14.060 "iops": 9841.363973313566, 00:09:14.060 "mibps": 1230.1704966641958, 00:09:14.060 "io_failed": 1, 00:09:14.060 "io_timeout": 0, 00:09:14.060 "avg_latency_us": 143.44359391154902, 00:09:14.060 "min_latency_us": 30.254545454545454, 00:09:14.060 "max_latency_us": 1839.4763636363637 00:09:14.060 } 00:09:14.060 ], 00:09:14.060 "core_count": 1 00:09:14.060 } 00:09:14.060 [2024-11-19 10:03:28.171938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.060 [2024-11-19 10:03:28.172001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.060 [2024-11-19 10:03:28.172078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.060 [2024-11-19 10:03:28.172095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65338 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65338 ']' 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65338 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65338 00:09:14.060 killing process with pid 65338 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.060 10:03:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65338' 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65338 00:09:14.060 10:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65338 00:09:14.060 [2024-11-19 10:03:28.211584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.345 [2024-11-19 10:03:28.438656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JY1lYulnGu 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:15.722 00:09:15.722 real 0m4.803s 00:09:15.722 user 0m5.830s 00:09:15.722 sys 0m0.662s 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.722 ************************************ 00:09:15.722 END TEST raid_write_error_test 00:09:15.722 ************************************ 00:09:15.722 10:03:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 10:03:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:15.722 10:03:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:15.722 10:03:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:15.722 10:03:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.722 10:03:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 ************************************ 00:09:15.722 START TEST raid_state_function_test 00:09:15.722 ************************************ 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:15.722 10:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:15.722 Process raid pid: 65486 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65486 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65486' 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65486 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:15.722 10:03:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65486 ']' 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.722 10:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 [2024-11-19 10:03:29.814458] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:15.722 [2024-11-19 10:03:29.814669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.982 [2024-11-19 10:03:30.001490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.982 [2024-11-19 10:03:30.150595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.240 [2024-11-19 10:03:30.380964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.240 [2024-11-19 10:03:30.381298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.849 [2024-11-19 10:03:30.828277] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.849 [2024-11-19 10:03:30.828511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.849 [2024-11-19 10:03:30.828654] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.849 [2024-11-19 10:03:30.828692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.849 [2024-11-19 10:03:30.828706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.849 [2024-11-19 10:03:30.828723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.849 "name": "Existed_Raid", 00:09:16.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.849 "strip_size_kb": 64, 00:09:16.849 "state": "configuring", 00:09:16.849 "raid_level": "concat", 00:09:16.849 "superblock": false, 00:09:16.849 "num_base_bdevs": 3, 00:09:16.849 "num_base_bdevs_discovered": 0, 00:09:16.849 "num_base_bdevs_operational": 3, 00:09:16.849 "base_bdevs_list": [ 00:09:16.849 { 00:09:16.849 "name": "BaseBdev1", 00:09:16.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.849 "is_configured": false, 00:09:16.849 "data_offset": 0, 00:09:16.849 "data_size": 0 00:09:16.849 }, 00:09:16.849 { 00:09:16.849 "name": "BaseBdev2", 00:09:16.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.849 "is_configured": false, 00:09:16.849 "data_offset": 0, 00:09:16.849 "data_size": 0 00:09:16.849 }, 00:09:16.849 { 00:09:16.849 "name": "BaseBdev3", 00:09:16.849 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:16.849 "is_configured": false, 00:09:16.849 "data_offset": 0, 00:09:16.849 "data_size": 0 00:09:16.849 } 00:09:16.849 ] 00:09:16.849 }' 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.849 10:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.108 [2024-11-19 10:03:31.332348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.108 [2024-11-19 10:03:31.332579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.108 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.367 [2024-11-19 10:03:31.344338] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.367 [2024-11-19 10:03:31.344535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.367 [2024-11-19 10:03:31.344663] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.367 [2024-11-19 10:03:31.344727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:17.367 [2024-11-19 10:03:31.344928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.367 [2024-11-19 10:03:31.344993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.367 [2024-11-19 10:03:31.392888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.367 BaseBdev1 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.367 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.368 [ 00:09:17.368 { 00:09:17.368 "name": "BaseBdev1", 00:09:17.368 "aliases": [ 00:09:17.368 "53fbe130-89f7-40f3-b635-a2fa492977ca" 00:09:17.368 ], 00:09:17.368 "product_name": "Malloc disk", 00:09:17.368 "block_size": 512, 00:09:17.368 "num_blocks": 65536, 00:09:17.368 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:17.368 "assigned_rate_limits": { 00:09:17.368 "rw_ios_per_sec": 0, 00:09:17.368 "rw_mbytes_per_sec": 0, 00:09:17.368 "r_mbytes_per_sec": 0, 00:09:17.368 "w_mbytes_per_sec": 0 00:09:17.368 }, 00:09:17.368 "claimed": true, 00:09:17.368 "claim_type": "exclusive_write", 00:09:17.368 "zoned": false, 00:09:17.368 "supported_io_types": { 00:09:17.368 "read": true, 00:09:17.368 "write": true, 00:09:17.368 "unmap": true, 00:09:17.368 "flush": true, 00:09:17.368 "reset": true, 00:09:17.368 "nvme_admin": false, 00:09:17.368 "nvme_io": false, 00:09:17.368 "nvme_io_md": false, 00:09:17.368 "write_zeroes": true, 00:09:17.368 "zcopy": true, 00:09:17.368 "get_zone_info": false, 00:09:17.368 "zone_management": false, 00:09:17.368 "zone_append": false, 00:09:17.368 "compare": false, 00:09:17.368 "compare_and_write": false, 00:09:17.368 "abort": true, 00:09:17.368 "seek_hole": false, 00:09:17.368 "seek_data": false, 00:09:17.368 "copy": true, 00:09:17.368 "nvme_iov_md": false 00:09:17.368 }, 00:09:17.368 "memory_domains": [ 00:09:17.368 { 00:09:17.368 "dma_device_id": "system", 00:09:17.368 "dma_device_type": 1 00:09:17.368 }, 00:09:17.368 { 00:09:17.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:17.368 "dma_device_type": 2 00:09:17.368 } 00:09:17.368 ], 00:09:17.368 "driver_specific": {} 00:09:17.368 } 00:09:17.368 ] 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.368 10:03:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.368 "name": "Existed_Raid", 00:09:17.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.368 "strip_size_kb": 64, 00:09:17.368 "state": "configuring", 00:09:17.368 "raid_level": "concat", 00:09:17.368 "superblock": false, 00:09:17.368 "num_base_bdevs": 3, 00:09:17.368 "num_base_bdevs_discovered": 1, 00:09:17.368 "num_base_bdevs_operational": 3, 00:09:17.368 "base_bdevs_list": [ 00:09:17.368 { 00:09:17.368 "name": "BaseBdev1", 00:09:17.368 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:17.368 "is_configured": true, 00:09:17.368 "data_offset": 0, 00:09:17.368 "data_size": 65536 00:09:17.368 }, 00:09:17.368 { 00:09:17.368 "name": "BaseBdev2", 00:09:17.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.368 "is_configured": false, 00:09:17.368 "data_offset": 0, 00:09:17.368 "data_size": 0 00:09:17.368 }, 00:09:17.368 { 00:09:17.368 "name": "BaseBdev3", 00:09:17.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.368 "is_configured": false, 00:09:17.368 "data_offset": 0, 00:09:17.368 "data_size": 0 00:09:17.368 } 00:09:17.368 ] 00:09:17.368 }' 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.368 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.935 [2024-11-19 10:03:31.905110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.935 [2024-11-19 10:03:31.905328] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.935 [2024-11-19 10:03:31.913362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.935 [2024-11-19 10:03:31.919097] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.935 [2024-11-19 10:03:31.919428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.935 [2024-11-19 10:03:31.919678] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.935 [2024-11-19 10:03:31.920005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.935 10:03:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.935 "name": "Existed_Raid", 00:09:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.935 "strip_size_kb": 64, 00:09:17.935 "state": "configuring", 00:09:17.935 "raid_level": "concat", 00:09:17.935 "superblock": false, 00:09:17.935 "num_base_bdevs": 3, 00:09:17.935 "num_base_bdevs_discovered": 1, 00:09:17.935 "num_base_bdevs_operational": 3, 00:09:17.935 "base_bdevs_list": [ 00:09:17.935 { 00:09:17.935 "name": "BaseBdev1", 00:09:17.935 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:17.935 "is_configured": true, 00:09:17.935 "data_offset": 
0, 00:09:17.935 "data_size": 65536 00:09:17.935 }, 00:09:17.935 { 00:09:17.935 "name": "BaseBdev2", 00:09:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.935 "is_configured": false, 00:09:17.935 "data_offset": 0, 00:09:17.935 "data_size": 0 00:09:17.935 }, 00:09:17.935 { 00:09:17.935 "name": "BaseBdev3", 00:09:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.935 "is_configured": false, 00:09:17.935 "data_offset": 0, 00:09:17.935 "data_size": 0 00:09:17.935 } 00:09:17.935 ] 00:09:17.935 }' 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.935 10:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.194 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.194 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.194 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.454 [2024-11-19 10:03:32.453470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.454 BaseBdev2 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
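The `verify_raid_bdev_state` calls traced throughout this log fetch `bdev_raid_get_bdevs all` and narrow the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before checking the state and base-bdev counts. A minimal Python sketch of that same selection logic (the sample record mirrors the "Existed_Raid" JSON dumped above after BaseBdev1 was created; the helper function name is my own, not part of the test suite):

```python
# Sketch of the state check verify_raid_bdev_state performs via jq.
# Sample record mirrors the log's JSON with 1 of 3 base bdevs discovered.

def select_raid(bdevs, name):
    """Mimic jq '.[] | select(.name == NAME)' over bdev_raid_get_bdevs output."""
    return next(b for b in bdevs if b["name"] == name)

bdevs = [{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True,  "data_size": 65536},
        {"name": "BaseBdev2", "is_configured": False, "data_size": 0},
        {"name": "BaseBdev3", "is_configured": False, "data_size": 0},
    ],
}]

info = select_raid(bdevs, "Existed_Raid")
assert info["state"] == "configuring"          # raid stays configuring until all bases exist
discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
assert discovered == info["num_base_bdevs_discovered"]  # counts must agree
print(info["state"], discovered)
```

The shell helper makes the same comparison against `expected_state` and fails the test on any mismatch.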
00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.454 [ 00:09:18.454 { 00:09:18.454 "name": "BaseBdev2", 00:09:18.454 "aliases": [ 00:09:18.454 "c624fc69-9737-465b-bd7c-9b760d3ab68c" 00:09:18.454 ], 00:09:18.454 "product_name": "Malloc disk", 00:09:18.454 "block_size": 512, 00:09:18.454 "num_blocks": 65536, 00:09:18.454 "uuid": "c624fc69-9737-465b-bd7c-9b760d3ab68c", 00:09:18.454 "assigned_rate_limits": { 00:09:18.454 "rw_ios_per_sec": 0, 00:09:18.454 "rw_mbytes_per_sec": 0, 00:09:18.454 "r_mbytes_per_sec": 0, 00:09:18.454 "w_mbytes_per_sec": 0 00:09:18.454 }, 00:09:18.454 "claimed": true, 00:09:18.454 "claim_type": "exclusive_write", 00:09:18.454 "zoned": false, 00:09:18.454 "supported_io_types": { 00:09:18.454 "read": true, 00:09:18.454 "write": true, 00:09:18.454 "unmap": true, 00:09:18.454 "flush": true, 00:09:18.454 "reset": true, 00:09:18.454 "nvme_admin": false, 00:09:18.454 "nvme_io": false, 00:09:18.454 "nvme_io_md": false, 00:09:18.454 "write_zeroes": true, 00:09:18.454 "zcopy": true, 00:09:18.454 "get_zone_info": false, 00:09:18.454 "zone_management": false, 00:09:18.454 "zone_append": false, 00:09:18.454 "compare": false, 00:09:18.454 "compare_and_write": false, 00:09:18.454 "abort": true, 00:09:18.454 "seek_hole": 
false, 00:09:18.454 "seek_data": false, 00:09:18.454 "copy": true, 00:09:18.454 "nvme_iov_md": false 00:09:18.454 }, 00:09:18.454 "memory_domains": [ 00:09:18.454 { 00:09:18.454 "dma_device_id": "system", 00:09:18.454 "dma_device_type": 1 00:09:18.454 }, 00:09:18.454 { 00:09:18.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.454 "dma_device_type": 2 00:09:18.454 } 00:09:18.454 ], 00:09:18.454 "driver_specific": {} 00:09:18.454 } 00:09:18.454 ] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.454 "name": "Existed_Raid", 00:09:18.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.454 "strip_size_kb": 64, 00:09:18.454 "state": "configuring", 00:09:18.454 "raid_level": "concat", 00:09:18.454 "superblock": false, 00:09:18.454 "num_base_bdevs": 3, 00:09:18.454 "num_base_bdevs_discovered": 2, 00:09:18.454 "num_base_bdevs_operational": 3, 00:09:18.454 "base_bdevs_list": [ 00:09:18.454 { 00:09:18.454 "name": "BaseBdev1", 00:09:18.454 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:18.454 "is_configured": true, 00:09:18.454 "data_offset": 0, 00:09:18.454 "data_size": 65536 00:09:18.454 }, 00:09:18.454 { 00:09:18.454 "name": "BaseBdev2", 00:09:18.454 "uuid": "c624fc69-9737-465b-bd7c-9b760d3ab68c", 00:09:18.454 "is_configured": true, 00:09:18.454 "data_offset": 0, 00:09:18.454 "data_size": 65536 00:09:18.454 }, 00:09:18.454 { 00:09:18.454 "name": "BaseBdev3", 00:09:18.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.454 "is_configured": false, 00:09:18.454 "data_offset": 0, 00:09:18.454 "data_size": 0 00:09:18.454 } 00:09:18.454 ] 00:09:18.454 }' 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.454 10:03:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.022 10:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.022 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.022 10:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.022 [2024-11-19 10:03:33.047661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.022 BaseBdev3 00:09:19.022 [2024-11-19 10:03:33.048030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.022 [2024-11-19 10:03:33.048089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:19.022 [2024-11-19 10:03:33.048559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.022 [2024-11-19 10:03:33.048860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.022 [2024-11-19 10:03:33.048883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.022 [2024-11-19 10:03:33.049401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.022 10:03:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.022 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.023 [ 00:09:19.023 { 00:09:19.023 "name": "BaseBdev3", 00:09:19.023 "aliases": [ 00:09:19.023 "a555abb1-7faa-408c-8d9d-7d109323f81a" 00:09:19.023 ], 00:09:19.023 "product_name": "Malloc disk", 00:09:19.023 "block_size": 512, 00:09:19.023 "num_blocks": 65536, 00:09:19.023 "uuid": "a555abb1-7faa-408c-8d9d-7d109323f81a", 00:09:19.023 "assigned_rate_limits": { 00:09:19.023 "rw_ios_per_sec": 0, 00:09:19.023 "rw_mbytes_per_sec": 0, 00:09:19.023 "r_mbytes_per_sec": 0, 00:09:19.023 "w_mbytes_per_sec": 0 00:09:19.023 }, 00:09:19.023 "claimed": true, 00:09:19.023 "claim_type": "exclusive_write", 00:09:19.023 "zoned": false, 00:09:19.023 "supported_io_types": { 00:09:19.023 "read": true, 00:09:19.023 "write": true, 00:09:19.023 "unmap": true, 00:09:19.023 "flush": true, 00:09:19.023 "reset": true, 00:09:19.023 "nvme_admin": false, 00:09:19.023 "nvme_io": false, 00:09:19.023 "nvme_io_md": false, 00:09:19.023 "write_zeroes": true, 00:09:19.023 "zcopy": true, 00:09:19.023 "get_zone_info": false, 00:09:19.023 "zone_management": false, 00:09:19.023 "zone_append": false, 00:09:19.023 "compare": false, 
00:09:19.023 "compare_and_write": false, 00:09:19.023 "abort": true, 00:09:19.023 "seek_hole": false, 00:09:19.023 "seek_data": false, 00:09:19.023 "copy": true, 00:09:19.023 "nvme_iov_md": false 00:09:19.023 }, 00:09:19.023 "memory_domains": [ 00:09:19.023 { 00:09:19.023 "dma_device_id": "system", 00:09:19.023 "dma_device_type": 1 00:09:19.023 }, 00:09:19.023 { 00:09:19.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.023 "dma_device_type": 2 00:09:19.023 } 00:09:19.023 ], 00:09:19.023 "driver_specific": {} 00:09:19.023 } 00:09:19.023 ] 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.023 "name": "Existed_Raid", 00:09:19.023 "uuid": "54cfdb45-449f-47fb-8fe6-18cad844cbbb", 00:09:19.023 "strip_size_kb": 64, 00:09:19.023 "state": "online", 00:09:19.023 "raid_level": "concat", 00:09:19.023 "superblock": false, 00:09:19.023 "num_base_bdevs": 3, 00:09:19.023 "num_base_bdevs_discovered": 3, 00:09:19.023 "num_base_bdevs_operational": 3, 00:09:19.023 "base_bdevs_list": [ 00:09:19.023 { 00:09:19.023 "name": "BaseBdev1", 00:09:19.023 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:19.023 "is_configured": true, 00:09:19.023 "data_offset": 0, 00:09:19.023 "data_size": 65536 00:09:19.023 }, 00:09:19.023 { 00:09:19.023 "name": "BaseBdev2", 00:09:19.023 "uuid": "c624fc69-9737-465b-bd7c-9b760d3ab68c", 00:09:19.023 "is_configured": true, 00:09:19.023 "data_offset": 0, 00:09:19.023 "data_size": 65536 00:09:19.023 }, 00:09:19.023 { 00:09:19.023 "name": "BaseBdev3", 00:09:19.023 "uuid": "a555abb1-7faa-408c-8d9d-7d109323f81a", 00:09:19.023 "is_configured": true, 00:09:19.023 "data_offset": 0, 00:09:19.023 "data_size": 65536 00:09:19.023 } 00:09:19.023 ] 00:09:19.023 }' 00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
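The `blockcnt 196608, blocklen 512` debug line logged when the raid went online follows directly from the test's geometry: each base bdev comes from `bdev_malloc_create 32 512` (32 MiB in 512-byte blocks), and with `superblock: false` a concat raid simply concatenates its base bdevs. A quick sketch of that arithmetic:

```python
# Capacity arithmetic behind "blockcnt 196608, blocklen 512":
# three malloc base bdevs of 32 MiB each, concatenated (raid_level concat,
# no superblock reserved), in 512 B blocks.

BLOCK_SIZE = 512
base_num_blocks = 32 * 1024 * 1024 // BLOCK_SIZE   # 65536 blocks per malloc bdev
num_base_bdevs = 3

raid_num_blocks = num_base_bdevs * base_num_blocks
assert base_num_blocks == 65536                    # matches data_size in the log
assert raid_num_blocks == 196608                   # matches raid_bdev_configure_cont
print(raid_num_blocks * BLOCK_SIZE // (1024 * 1024), "MiB")
```

This yields a 96 MiB volume; striped raid levels may round each base's contribution to the strip size, but for this concat configuration the log's totals are an exact sum.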
00:09:19.023 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.654 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.654 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.654 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.654 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 [2024-11-19 10:03:33.629150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.655 "name": "Existed_Raid", 00:09:19.655 "aliases": [ 00:09:19.655 "54cfdb45-449f-47fb-8fe6-18cad844cbbb" 00:09:19.655 ], 00:09:19.655 "product_name": "Raid Volume", 00:09:19.655 "block_size": 512, 00:09:19.655 "num_blocks": 196608, 00:09:19.655 "uuid": "54cfdb45-449f-47fb-8fe6-18cad844cbbb", 00:09:19.655 "assigned_rate_limits": { 00:09:19.655 "rw_ios_per_sec": 0, 00:09:19.655 "rw_mbytes_per_sec": 0, 00:09:19.655 "r_mbytes_per_sec": 
0, 00:09:19.655 "w_mbytes_per_sec": 0 00:09:19.655 }, 00:09:19.655 "claimed": false, 00:09:19.655 "zoned": false, 00:09:19.655 "supported_io_types": { 00:09:19.655 "read": true, 00:09:19.655 "write": true, 00:09:19.655 "unmap": true, 00:09:19.655 "flush": true, 00:09:19.655 "reset": true, 00:09:19.655 "nvme_admin": false, 00:09:19.655 "nvme_io": false, 00:09:19.655 "nvme_io_md": false, 00:09:19.655 "write_zeroes": true, 00:09:19.655 "zcopy": false, 00:09:19.655 "get_zone_info": false, 00:09:19.655 "zone_management": false, 00:09:19.655 "zone_append": false, 00:09:19.655 "compare": false, 00:09:19.655 "compare_and_write": false, 00:09:19.655 "abort": false, 00:09:19.655 "seek_hole": false, 00:09:19.655 "seek_data": false, 00:09:19.655 "copy": false, 00:09:19.655 "nvme_iov_md": false 00:09:19.655 }, 00:09:19.655 "memory_domains": [ 00:09:19.655 { 00:09:19.655 "dma_device_id": "system", 00:09:19.655 "dma_device_type": 1 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.655 "dma_device_type": 2 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "dma_device_id": "system", 00:09:19.655 "dma_device_type": 1 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.655 "dma_device_type": 2 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "dma_device_id": "system", 00:09:19.655 "dma_device_type": 1 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.655 "dma_device_type": 2 00:09:19.655 } 00:09:19.655 ], 00:09:19.655 "driver_specific": { 00:09:19.655 "raid": { 00:09:19.655 "uuid": "54cfdb45-449f-47fb-8fe6-18cad844cbbb", 00:09:19.655 "strip_size_kb": 64, 00:09:19.655 "state": "online", 00:09:19.655 "raid_level": "concat", 00:09:19.655 "superblock": false, 00:09:19.655 "num_base_bdevs": 3, 00:09:19.655 "num_base_bdevs_discovered": 3, 00:09:19.655 "num_base_bdevs_operational": 3, 00:09:19.655 "base_bdevs_list": [ 00:09:19.655 { 00:09:19.655 "name": "BaseBdev1", 
00:09:19.655 "uuid": "53fbe130-89f7-40f3-b635-a2fa492977ca", 00:09:19.655 "is_configured": true, 00:09:19.655 "data_offset": 0, 00:09:19.655 "data_size": 65536 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "name": "BaseBdev2", 00:09:19.655 "uuid": "c624fc69-9737-465b-bd7c-9b760d3ab68c", 00:09:19.655 "is_configured": true, 00:09:19.655 "data_offset": 0, 00:09:19.655 "data_size": 65536 00:09:19.655 }, 00:09:19.655 { 00:09:19.655 "name": "BaseBdev3", 00:09:19.655 "uuid": "a555abb1-7faa-408c-8d9d-7d109323f81a", 00:09:19.655 "is_configured": true, 00:09:19.655 "data_offset": 0, 00:09:19.655 "data_size": 65536 00:09:19.655 } 00:09:19.655 ] 00:09:19.655 } 00:09:19.655 } 00:09:19.655 }' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:19.655 BaseBdev2 00:09:19.655 BaseBdev3' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.914 10:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 [2024-11-19 10:03:33.944915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.914 [2024-11-19 10:03:33.945091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.914 [2024-11-19 10:03:33.945293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.914 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.914 "name": "Existed_Raid", 00:09:19.914 "uuid": "54cfdb45-449f-47fb-8fe6-18cad844cbbb", 00:09:19.914 "strip_size_kb": 64, 00:09:19.914 "state": "offline", 00:09:19.914 "raid_level": "concat", 00:09:19.914 "superblock": false, 00:09:19.914 "num_base_bdevs": 3, 00:09:19.914 "num_base_bdevs_discovered": 2, 00:09:19.914 "num_base_bdevs_operational": 2, 00:09:19.914 "base_bdevs_list": [ 00:09:19.914 { 00:09:19.914 "name": null, 00:09:19.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.914 "is_configured": false, 00:09:19.914 "data_offset": 0, 00:09:19.914 "data_size": 65536 00:09:19.914 }, 00:09:19.914 { 00:09:19.914 "name": "BaseBdev2", 00:09:19.914 "uuid": 
"c624fc69-9737-465b-bd7c-9b760d3ab68c", 00:09:19.914 "is_configured": true, 00:09:19.915 "data_offset": 0, 00:09:19.915 "data_size": 65536 00:09:19.915 }, 00:09:19.915 { 00:09:19.915 "name": "BaseBdev3", 00:09:19.915 "uuid": "a555abb1-7faa-408c-8d9d-7d109323f81a", 00:09:19.915 "is_configured": true, 00:09:19.915 "data_offset": 0, 00:09:19.915 "data_size": 65536 00:09:19.915 } 00:09:19.915 ] 00:09:19.915 }' 00:09:19.915 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.915 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.483 [2024-11-19 10:03:34.611319] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.483 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 [2024-11-19 10:03:34.773667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.742 [2024-11-19 10:03:34.773745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.742 10:03:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 BaseBdev2 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.742 
10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.742 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.001 [ 00:09:21.001 { 00:09:21.001 "name": "BaseBdev2", 00:09:21.001 "aliases": [ 00:09:21.001 "f6d9bd0d-1589-4bcc-9933-a195b39d4f95" 00:09:21.001 ], 00:09:21.001 "product_name": "Malloc disk", 00:09:21.001 "block_size": 512, 00:09:21.001 "num_blocks": 65536, 00:09:21.001 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:21.001 "assigned_rate_limits": { 00:09:21.001 "rw_ios_per_sec": 0, 00:09:21.001 "rw_mbytes_per_sec": 0, 00:09:21.001 "r_mbytes_per_sec": 0, 00:09:21.001 "w_mbytes_per_sec": 0 00:09:21.001 }, 00:09:21.001 "claimed": false, 00:09:21.001 "zoned": false, 00:09:21.001 "supported_io_types": { 00:09:21.001 "read": true, 00:09:21.001 "write": true, 00:09:21.001 "unmap": true, 00:09:21.001 "flush": true, 00:09:21.001 "reset": true, 00:09:21.001 "nvme_admin": false, 00:09:21.001 "nvme_io": false, 00:09:21.001 "nvme_io_md": false, 00:09:21.001 "write_zeroes": true, 
00:09:21.001 "zcopy": true, 00:09:21.001 "get_zone_info": false, 00:09:21.001 "zone_management": false, 00:09:21.001 "zone_append": false, 00:09:21.001 "compare": false, 00:09:21.001 "compare_and_write": false, 00:09:21.001 "abort": true, 00:09:21.001 "seek_hole": false, 00:09:21.001 "seek_data": false, 00:09:21.001 "copy": true, 00:09:21.001 "nvme_iov_md": false 00:09:21.001 }, 00:09:21.001 "memory_domains": [ 00:09:21.001 { 00:09:21.001 "dma_device_id": "system", 00:09:21.001 "dma_device_type": 1 00:09:21.001 }, 00:09:21.001 { 00:09:21.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.001 "dma_device_type": 2 00:09:21.001 } 00:09:21.001 ], 00:09:21.001 "driver_specific": {} 00:09:21.001 } 00:09:21.001 ] 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.001 10:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.001 BaseBdev3 00:09:21.001 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.001 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.001 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.002 10:03:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.002 [ 00:09:21.002 { 00:09:21.002 "name": "BaseBdev3", 00:09:21.002 "aliases": [ 00:09:21.002 "518727a0-8273-4e11-80a4-adfc16c3c16f" 00:09:21.002 ], 00:09:21.002 "product_name": "Malloc disk", 00:09:21.002 "block_size": 512, 00:09:21.002 "num_blocks": 65536, 00:09:21.002 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:21.002 "assigned_rate_limits": { 00:09:21.002 "rw_ios_per_sec": 0, 00:09:21.002 "rw_mbytes_per_sec": 0, 00:09:21.002 "r_mbytes_per_sec": 0, 00:09:21.002 "w_mbytes_per_sec": 0 00:09:21.002 }, 00:09:21.002 "claimed": false, 00:09:21.002 "zoned": false, 00:09:21.002 "supported_io_types": { 00:09:21.002 "read": true, 00:09:21.002 "write": true, 00:09:21.002 "unmap": true, 00:09:21.002 "flush": true, 00:09:21.002 "reset": true, 00:09:21.002 "nvme_admin": false, 00:09:21.002 "nvme_io": false, 00:09:21.002 "nvme_io_md": false, 00:09:21.002 "write_zeroes": true, 
00:09:21.002 "zcopy": true, 00:09:21.002 "get_zone_info": false, 00:09:21.002 "zone_management": false, 00:09:21.002 "zone_append": false, 00:09:21.002 "compare": false, 00:09:21.002 "compare_and_write": false, 00:09:21.002 "abort": true, 00:09:21.002 "seek_hole": false, 00:09:21.002 "seek_data": false, 00:09:21.002 "copy": true, 00:09:21.002 "nvme_iov_md": false 00:09:21.002 }, 00:09:21.002 "memory_domains": [ 00:09:21.002 { 00:09:21.002 "dma_device_id": "system", 00:09:21.002 "dma_device_type": 1 00:09:21.002 }, 00:09:21.002 { 00:09:21.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.002 "dma_device_type": 2 00:09:21.002 } 00:09:21.002 ], 00:09:21.002 "driver_specific": {} 00:09:21.002 } 00:09:21.002 ] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.002 [2024-11-19 10:03:35.081191] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.002 [2024-11-19 10:03:35.081385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.002 [2024-11-19 10:03:35.081529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.002 [2024-11-19 10:03:35.084333] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.002 "name": "Existed_Raid", 00:09:21.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.002 "strip_size_kb": 64, 00:09:21.002 "state": "configuring", 00:09:21.002 "raid_level": "concat", 00:09:21.002 "superblock": false, 00:09:21.002 "num_base_bdevs": 3, 00:09:21.002 "num_base_bdevs_discovered": 2, 00:09:21.002 "num_base_bdevs_operational": 3, 00:09:21.002 "base_bdevs_list": [ 00:09:21.002 { 00:09:21.002 "name": "BaseBdev1", 00:09:21.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.002 "is_configured": false, 00:09:21.002 "data_offset": 0, 00:09:21.002 "data_size": 0 00:09:21.002 }, 00:09:21.002 { 00:09:21.002 "name": "BaseBdev2", 00:09:21.002 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:21.002 "is_configured": true, 00:09:21.002 "data_offset": 0, 00:09:21.002 "data_size": 65536 00:09:21.002 }, 00:09:21.002 { 00:09:21.002 "name": "BaseBdev3", 00:09:21.002 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:21.002 "is_configured": true, 00:09:21.002 "data_offset": 0, 00:09:21.002 "data_size": 65536 00:09:21.002 } 00:09:21.002 ] 00:09:21.002 }' 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.002 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.570 [2024-11-19 10:03:35.609306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.570 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.571 "name": "Existed_Raid", 00:09:21.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.571 "strip_size_kb": 64, 00:09:21.571 "state": "configuring", 00:09:21.571 "raid_level": "concat", 00:09:21.571 "superblock": false, 
00:09:21.571 "num_base_bdevs": 3, 00:09:21.571 "num_base_bdevs_discovered": 1, 00:09:21.571 "num_base_bdevs_operational": 3, 00:09:21.571 "base_bdevs_list": [ 00:09:21.571 { 00:09:21.571 "name": "BaseBdev1", 00:09:21.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.571 "is_configured": false, 00:09:21.571 "data_offset": 0, 00:09:21.571 "data_size": 0 00:09:21.571 }, 00:09:21.571 { 00:09:21.571 "name": null, 00:09:21.571 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:21.571 "is_configured": false, 00:09:21.571 "data_offset": 0, 00:09:21.571 "data_size": 65536 00:09:21.571 }, 00:09:21.571 { 00:09:21.571 "name": "BaseBdev3", 00:09:21.571 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:21.571 "is_configured": true, 00:09:21.571 "data_offset": 0, 00:09:21.571 "data_size": 65536 00:09:21.571 } 00:09:21.571 ] 00:09:21.571 }' 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.571 10:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.139 
10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 [2024-11-19 10:03:36.203258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.139 BaseBdev1 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 [ 00:09:22.139 { 00:09:22.139 "name": "BaseBdev1", 00:09:22.139 "aliases": [ 00:09:22.139 "0631d011-9996-401e-8e74-d68514ade24e" 00:09:22.139 ], 00:09:22.139 "product_name": 
"Malloc disk", 00:09:22.139 "block_size": 512, 00:09:22.139 "num_blocks": 65536, 00:09:22.139 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:22.139 "assigned_rate_limits": { 00:09:22.139 "rw_ios_per_sec": 0, 00:09:22.139 "rw_mbytes_per_sec": 0, 00:09:22.139 "r_mbytes_per_sec": 0, 00:09:22.139 "w_mbytes_per_sec": 0 00:09:22.139 }, 00:09:22.139 "claimed": true, 00:09:22.139 "claim_type": "exclusive_write", 00:09:22.139 "zoned": false, 00:09:22.139 "supported_io_types": { 00:09:22.139 "read": true, 00:09:22.139 "write": true, 00:09:22.139 "unmap": true, 00:09:22.139 "flush": true, 00:09:22.139 "reset": true, 00:09:22.139 "nvme_admin": false, 00:09:22.139 "nvme_io": false, 00:09:22.139 "nvme_io_md": false, 00:09:22.139 "write_zeroes": true, 00:09:22.139 "zcopy": true, 00:09:22.139 "get_zone_info": false, 00:09:22.139 "zone_management": false, 00:09:22.139 "zone_append": false, 00:09:22.139 "compare": false, 00:09:22.139 "compare_and_write": false, 00:09:22.139 "abort": true, 00:09:22.139 "seek_hole": false, 00:09:22.139 "seek_data": false, 00:09:22.139 "copy": true, 00:09:22.139 "nvme_iov_md": false 00:09:22.139 }, 00:09:22.139 "memory_domains": [ 00:09:22.139 { 00:09:22.139 "dma_device_id": "system", 00:09:22.139 "dma_device_type": 1 00:09:22.139 }, 00:09:22.139 { 00:09:22.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.139 "dma_device_type": 2 00:09:22.139 } 00:09:22.139 ], 00:09:22.139 "driver_specific": {} 00:09:22.139 } 00:09:22.139 ] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.139 10:03:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.139 "name": "Existed_Raid", 00:09:22.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.139 "strip_size_kb": 64, 00:09:22.139 "state": "configuring", 00:09:22.139 "raid_level": "concat", 00:09:22.139 "superblock": false, 00:09:22.139 "num_base_bdevs": 3, 00:09:22.139 "num_base_bdevs_discovered": 2, 00:09:22.139 "num_base_bdevs_operational": 3, 00:09:22.139 "base_bdevs_list": [ 00:09:22.139 { 00:09:22.139 "name": "BaseBdev1", 
00:09:22.139 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:22.139 "is_configured": true, 00:09:22.139 "data_offset": 0, 00:09:22.139 "data_size": 65536 00:09:22.139 }, 00:09:22.139 { 00:09:22.139 "name": null, 00:09:22.139 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:22.139 "is_configured": false, 00:09:22.139 "data_offset": 0, 00:09:22.139 "data_size": 65536 00:09:22.139 }, 00:09:22.139 { 00:09:22.139 "name": "BaseBdev3", 00:09:22.139 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:22.139 "is_configured": true, 00:09:22.139 "data_offset": 0, 00:09:22.139 "data_size": 65536 00:09:22.139 } 00:09:22.139 ] 00:09:22.139 }' 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.139 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.707 [2024-11-19 10:03:36.795521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.707 
10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.707 "name": "Existed_Raid", 00:09:22.707 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:22.707 "strip_size_kb": 64, 00:09:22.707 "state": "configuring", 00:09:22.707 "raid_level": "concat", 00:09:22.707 "superblock": false, 00:09:22.707 "num_base_bdevs": 3, 00:09:22.707 "num_base_bdevs_discovered": 1, 00:09:22.707 "num_base_bdevs_operational": 3, 00:09:22.707 "base_bdevs_list": [ 00:09:22.707 { 00:09:22.707 "name": "BaseBdev1", 00:09:22.707 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:22.707 "is_configured": true, 00:09:22.707 "data_offset": 0, 00:09:22.707 "data_size": 65536 00:09:22.707 }, 00:09:22.707 { 00:09:22.707 "name": null, 00:09:22.707 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:22.707 "is_configured": false, 00:09:22.707 "data_offset": 0, 00:09:22.707 "data_size": 65536 00:09:22.707 }, 00:09:22.707 { 00:09:22.707 "name": null, 00:09:22.707 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:22.707 "is_configured": false, 00:09:22.707 "data_offset": 0, 00:09:22.707 "data_size": 65536 00:09:22.707 } 00:09:22.707 ] 00:09:22.707 }' 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.707 10:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.276 [2024-11-19 10:03:37.407776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.276 "name": "Existed_Raid", 00:09:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.276 "strip_size_kb": 64, 00:09:23.276 "state": "configuring", 00:09:23.276 "raid_level": "concat", 00:09:23.276 "superblock": false, 00:09:23.276 "num_base_bdevs": 3, 00:09:23.276 "num_base_bdevs_discovered": 2, 00:09:23.276 "num_base_bdevs_operational": 3, 00:09:23.276 "base_bdevs_list": [ 00:09:23.276 { 00:09:23.276 "name": "BaseBdev1", 00:09:23.276 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:23.276 "is_configured": true, 00:09:23.276 "data_offset": 0, 00:09:23.276 "data_size": 65536 00:09:23.276 }, 00:09:23.276 { 00:09:23.276 "name": null, 00:09:23.276 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:23.276 "is_configured": false, 00:09:23.276 "data_offset": 0, 00:09:23.276 "data_size": 65536 00:09:23.276 }, 00:09:23.276 { 00:09:23.276 "name": "BaseBdev3", 00:09:23.276 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:23.276 "is_configured": true, 00:09:23.276 "data_offset": 0, 00:09:23.276 "data_size": 65536 00:09:23.276 } 00:09:23.276 ] 00:09:23.276 }' 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.276 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.845 10:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.845 [2024-11-19 10:03:37.975925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.845 10:03:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.845 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.104 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.104 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.104 "name": "Existed_Raid", 00:09:24.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.104 "strip_size_kb": 64, 00:09:24.104 "state": "configuring", 00:09:24.104 "raid_level": "concat", 00:09:24.104 "superblock": false, 00:09:24.104 "num_base_bdevs": 3, 00:09:24.104 "num_base_bdevs_discovered": 1, 00:09:24.104 "num_base_bdevs_operational": 3, 00:09:24.104 "base_bdevs_list": [ 00:09:24.104 { 00:09:24.104 "name": null, 00:09:24.104 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:24.104 "is_configured": false, 00:09:24.104 "data_offset": 0, 00:09:24.104 "data_size": 65536 00:09:24.104 }, 00:09:24.104 { 00:09:24.104 "name": null, 00:09:24.104 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:24.104 "is_configured": false, 00:09:24.104 "data_offset": 0, 00:09:24.104 "data_size": 65536 00:09:24.104 }, 00:09:24.104 { 00:09:24.104 "name": "BaseBdev3", 00:09:24.104 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:24.104 "is_configured": true, 00:09:24.104 "data_offset": 0, 00:09:24.104 "data_size": 65536 00:09:24.104 } 00:09:24.104 ] 00:09:24.104 }' 00:09:24.104 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.104 10:03:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.698 [2024-11-19 10:03:38.672706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.698 10:03:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.698 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.699 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.699 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.699 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.699 "name": "Existed_Raid", 00:09:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.699 "strip_size_kb": 64, 00:09:24.699 "state": "configuring", 00:09:24.699 "raid_level": "concat", 00:09:24.699 "superblock": false, 00:09:24.699 "num_base_bdevs": 3, 00:09:24.699 "num_base_bdevs_discovered": 2, 00:09:24.699 "num_base_bdevs_operational": 3, 00:09:24.699 "base_bdevs_list": [ 00:09:24.699 { 00:09:24.699 "name": null, 00:09:24.699 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:24.699 "is_configured": false, 00:09:24.699 "data_offset": 0, 00:09:24.699 "data_size": 65536 00:09:24.699 }, 00:09:24.699 { 00:09:24.699 "name": "BaseBdev2", 00:09:24.699 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:24.699 "is_configured": true, 00:09:24.699 "data_offset": 
0, 00:09:24.699 "data_size": 65536 00:09:24.699 }, 00:09:24.699 { 00:09:24.699 "name": "BaseBdev3", 00:09:24.699 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:24.699 "is_configured": true, 00:09:24.699 "data_offset": 0, 00:09:24.699 "data_size": 65536 00:09:24.699 } 00:09:24.699 ] 00:09:24.699 }' 00:09:24.699 10:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.699 10:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0631d011-9996-401e-8e74-d68514ade24e 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 [2024-11-19 10:03:39.371717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:25.266 [2024-11-19 10:03:39.371833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.266 [2024-11-19 10:03:39.371853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:25.266 [2024-11-19 10:03:39.372216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.266 [2024-11-19 10:03:39.372431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.266 [2024-11-19 10:03:39.372454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:25.266 [2024-11-19 10:03:39.372835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.266 NewBaseBdev 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.266 
10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.266 [ 00:09:25.266 { 00:09:25.266 "name": "NewBaseBdev", 00:09:25.266 "aliases": [ 00:09:25.266 "0631d011-9996-401e-8e74-d68514ade24e" 00:09:25.266 ], 00:09:25.266 "product_name": "Malloc disk", 00:09:25.266 "block_size": 512, 00:09:25.266 "num_blocks": 65536, 00:09:25.266 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:25.266 "assigned_rate_limits": { 00:09:25.266 "rw_ios_per_sec": 0, 00:09:25.266 "rw_mbytes_per_sec": 0, 00:09:25.266 "r_mbytes_per_sec": 0, 00:09:25.266 "w_mbytes_per_sec": 0 00:09:25.266 }, 00:09:25.266 "claimed": true, 00:09:25.266 "claim_type": "exclusive_write", 00:09:25.266 "zoned": false, 00:09:25.266 "supported_io_types": { 00:09:25.266 "read": true, 00:09:25.266 "write": true, 00:09:25.266 "unmap": true, 00:09:25.266 "flush": true, 00:09:25.266 "reset": true, 00:09:25.266 "nvme_admin": false, 00:09:25.266 "nvme_io": false, 00:09:25.266 "nvme_io_md": false, 00:09:25.266 "write_zeroes": true, 00:09:25.266 "zcopy": true, 00:09:25.266 "get_zone_info": false, 00:09:25.266 "zone_management": false, 00:09:25.266 "zone_append": false, 00:09:25.266 "compare": false, 00:09:25.266 "compare_and_write": false, 00:09:25.266 "abort": true, 00:09:25.266 "seek_hole": false, 00:09:25.266 "seek_data": false, 00:09:25.266 "copy": true, 00:09:25.266 "nvme_iov_md": false 00:09:25.266 }, 00:09:25.266 
"memory_domains": [ 00:09:25.266 { 00:09:25.266 "dma_device_id": "system", 00:09:25.266 "dma_device_type": 1 00:09:25.266 }, 00:09:25.266 { 00:09:25.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.266 "dma_device_type": 2 00:09:25.266 } 00:09:25.266 ], 00:09:25.266 "driver_specific": {} 00:09:25.266 } 00:09:25.266 ] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.266 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.267 "name": "Existed_Raid", 00:09:25.267 "uuid": "fefdac2f-71f9-4ef7-989b-671bc77a88d8", 00:09:25.267 "strip_size_kb": 64, 00:09:25.267 "state": "online", 00:09:25.267 "raid_level": "concat", 00:09:25.267 "superblock": false, 00:09:25.267 "num_base_bdevs": 3, 00:09:25.267 "num_base_bdevs_discovered": 3, 00:09:25.267 "num_base_bdevs_operational": 3, 00:09:25.267 "base_bdevs_list": [ 00:09:25.267 { 00:09:25.267 "name": "NewBaseBdev", 00:09:25.267 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:25.267 "is_configured": true, 00:09:25.267 "data_offset": 0, 00:09:25.267 "data_size": 65536 00:09:25.267 }, 00:09:25.267 { 00:09:25.267 "name": "BaseBdev2", 00:09:25.267 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:25.267 "is_configured": true, 00:09:25.267 "data_offset": 0, 00:09:25.267 "data_size": 65536 00:09:25.267 }, 00:09:25.267 { 00:09:25.267 "name": "BaseBdev3", 00:09:25.267 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:25.267 "is_configured": true, 00:09:25.267 "data_offset": 0, 00:09:25.267 "data_size": 65536 00:09:25.267 } 00:09:25.267 ] 00:09:25.267 }' 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.267 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.833 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.833 [2024-11-19 10:03:39.924356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.834 10:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.834 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.834 "name": "Existed_Raid", 00:09:25.834 "aliases": [ 00:09:25.834 "fefdac2f-71f9-4ef7-989b-671bc77a88d8" 00:09:25.834 ], 00:09:25.834 "product_name": "Raid Volume", 00:09:25.834 "block_size": 512, 00:09:25.834 "num_blocks": 196608, 00:09:25.834 "uuid": "fefdac2f-71f9-4ef7-989b-671bc77a88d8", 00:09:25.834 "assigned_rate_limits": { 00:09:25.834 "rw_ios_per_sec": 0, 00:09:25.834 "rw_mbytes_per_sec": 0, 00:09:25.834 "r_mbytes_per_sec": 0, 00:09:25.834 "w_mbytes_per_sec": 0 00:09:25.834 }, 00:09:25.834 "claimed": false, 00:09:25.834 "zoned": false, 00:09:25.834 "supported_io_types": { 00:09:25.834 "read": true, 00:09:25.834 "write": true, 00:09:25.834 "unmap": true, 00:09:25.834 "flush": true, 00:09:25.834 "reset": true, 00:09:25.834 "nvme_admin": false, 00:09:25.834 "nvme_io": false, 00:09:25.834 "nvme_io_md": false, 00:09:25.834 "write_zeroes": true, 
00:09:25.834 "zcopy": false, 00:09:25.834 "get_zone_info": false, 00:09:25.834 "zone_management": false, 00:09:25.834 "zone_append": false, 00:09:25.834 "compare": false, 00:09:25.834 "compare_and_write": false, 00:09:25.834 "abort": false, 00:09:25.834 "seek_hole": false, 00:09:25.834 "seek_data": false, 00:09:25.834 "copy": false, 00:09:25.834 "nvme_iov_md": false 00:09:25.834 }, 00:09:25.834 "memory_domains": [ 00:09:25.834 { 00:09:25.834 "dma_device_id": "system", 00:09:25.834 "dma_device_type": 1 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.834 "dma_device_type": 2 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "dma_device_id": "system", 00:09:25.834 "dma_device_type": 1 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.834 "dma_device_type": 2 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "dma_device_id": "system", 00:09:25.834 "dma_device_type": 1 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.834 "dma_device_type": 2 00:09:25.834 } 00:09:25.834 ], 00:09:25.834 "driver_specific": { 00:09:25.834 "raid": { 00:09:25.834 "uuid": "fefdac2f-71f9-4ef7-989b-671bc77a88d8", 00:09:25.834 "strip_size_kb": 64, 00:09:25.834 "state": "online", 00:09:25.834 "raid_level": "concat", 00:09:25.834 "superblock": false, 00:09:25.834 "num_base_bdevs": 3, 00:09:25.834 "num_base_bdevs_discovered": 3, 00:09:25.834 "num_base_bdevs_operational": 3, 00:09:25.834 "base_bdevs_list": [ 00:09:25.834 { 00:09:25.834 "name": "NewBaseBdev", 00:09:25.834 "uuid": "0631d011-9996-401e-8e74-d68514ade24e", 00:09:25.834 "is_configured": true, 00:09:25.834 "data_offset": 0, 00:09:25.834 "data_size": 65536 00:09:25.834 }, 00:09:25.834 { 00:09:25.834 "name": "BaseBdev2", 00:09:25.834 "uuid": "f6d9bd0d-1589-4bcc-9933-a195b39d4f95", 00:09:25.834 "is_configured": true, 00:09:25.834 "data_offset": 0, 00:09:25.834 "data_size": 65536 00:09:25.834 }, 00:09:25.834 { 
00:09:25.834 "name": "BaseBdev3", 00:09:25.834 "uuid": "518727a0-8273-4e11-80a4-adfc16c3c16f", 00:09:25.834 "is_configured": true, 00:09:25.834 "data_offset": 0, 00:09:25.834 "data_size": 65536 00:09:25.834 } 00:09:25.834 ] 00:09:25.834 } 00:09:25.834 } 00:09:25.834 }' 00:09:25.834 10:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.834 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.834 BaseBdev2 00:09:25.834 BaseBdev3' 00:09:25.834 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.834 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.834 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.093 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test 
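The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above (bdev_raid.sh@193) compare each base bdev's "block_size md_size md_interleave dif_type" string against the raid bdev's. A minimal sketch of the idiom, with illustrative values: jq's `join(" ")` turns `[512, null, null, null]` into `512` followed by three spaces, and because the right-hand side of `[[ == ]]` is a glob pattern, the harness backslash-escapes every character to force a literal match, trailing whitespace included (quoting the right-hand side achieves the same thing).

```shell
# Illustrative values; in the trace these come from
# `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`.
cmp_raid_bdev='512   '   # "512" + three blanks for the three null fields
cmp_base_bdev='512   '

# Quoted RHS == escaped RHS: both disable glob interpretation,
# so trailing spaces must match exactly.
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
  echo match
else
  echo mismatch
fi
```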
-- common/autotest_common.sh@10 -- # set +x 00:09:26.094 [2024-11-19 10:03:40.220030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.094 [2024-11-19 10:03:40.220211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.094 [2024-11-19 10:03:40.220473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.094 [2024-11-19 10:03:40.220671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.094 [2024-11-19 10:03:40.220819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65486 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65486 ']' 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65486 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65486 00:09:26.094 killing process with pid 65486 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65486' 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65486 00:09:26.094 10:03:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65486 00:09:26.094 [2024-11-19 10:03:40.257046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.353 [2024-11-19 10:03:40.543381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:27.730 00:09:27.730 real 0m11.992s 00:09:27.730 user 0m19.538s 00:09:27.730 sys 0m1.794s 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.730 ************************************ 00:09:27.730 END TEST raid_state_function_test 00:09:27.730 ************************************ 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.730 10:03:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:27.730 10:03:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.730 10:03:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.730 10:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.730 ************************************ 00:09:27.730 START TEST raid_state_function_test_sb 00:09:27.730 ************************************ 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- 
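The `killprocess 65486` teardown traced above (autotest_common.sh@954-978) follows a standard pattern: confirm the pid is still alive with `kill -0`, announce the kill, send SIGTERM, then reap the child with `wait` so no zombie survives the test. A hedged sketch of that pattern; the `sleep` child here is only a stand-in for the bdev_svc app.

```shell
# Sketch of the killprocess pattern from the trace; not the exact
# autotest_common.sh implementation.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap; ignore the 128+SIGTERM status
}

sleep 30 &                                 # stand-in for the app under test
app_pid=$!
killprocess "$app_pid"
```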
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- 
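The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` lines above (bdev_raid.sh@209-211) are a C-style loop whose echoed names are captured into the `base_bdevs` array, as the `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')` assignment in the trace shows. A condensed sketch of the same construction:

```shell
# Build the BaseBdev1..BaseBdevN name list the way the xtrace above does,
# just without routing it through command substitution.
num_base_bdevs=3
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3
```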
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66127 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66127' 00:09:27.730 Process raid pid: 66127 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66127 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66127 ']' 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.730 10:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.730 [2024-11-19 10:03:41.849877] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:27.730 [2024-11-19 10:03:41.850076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.989 [2024-11-19 10:03:42.048259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.247 [2024-11-19 10:03:42.233228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.507 [2024-11-19 10:03:42.506204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.507 [2024-11-19 10:03:42.506274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.766 [2024-11-19 10:03:42.937138] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.766 [2024-11-19 10:03:42.937381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.766 [2024-11-19 
10:03:42.937412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.766 [2024-11-19 10:03:42.937431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.766 [2024-11-19 10:03:42.937443] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.766 [2024-11-19 10:03:42.937458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.766 10:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.025 10:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.025 "name": "Existed_Raid", 00:09:29.025 "uuid": "95275bb2-9e6a-4c9e-b0cc-9f2eab09471a", 00:09:29.025 "strip_size_kb": 64, 00:09:29.025 "state": "configuring", 00:09:29.025 "raid_level": "concat", 00:09:29.025 "superblock": true, 00:09:29.025 "num_base_bdevs": 3, 00:09:29.025 "num_base_bdevs_discovered": 0, 00:09:29.025 "num_base_bdevs_operational": 3, 00:09:29.025 "base_bdevs_list": [ 00:09:29.025 { 00:09:29.025 "name": "BaseBdev1", 00:09:29.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.025 "is_configured": false, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 0 00:09:29.025 }, 00:09:29.025 { 00:09:29.025 "name": "BaseBdev2", 00:09:29.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.025 "is_configured": false, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 0 00:09:29.025 }, 00:09:29.025 { 00:09:29.025 "name": "BaseBdev3", 00:09:29.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.025 "is_configured": false, 00:09:29.025 "data_offset": 0, 00:09:29.025 "data_size": 0 00:09:29.025 } 00:09:29.025 ] 00:09:29.025 }' 00:09:29.025 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.025 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.284 [2024-11-19 10:03:43.477226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.284 [2024-11-19 10:03:43.477277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.284 [2024-11-19 10:03:43.485166] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.284 [2024-11-19 10:03:43.485255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.284 [2024-11-19 10:03:43.485272] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.284 [2024-11-19 10:03:43.485287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.284 [2024-11-19 10:03:43.485297] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.284 [2024-11-19 10:03:43.485313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.284 
10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.284 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 [2024-11-19 10:03:43.535301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.544 BaseBdev1 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 [ 00:09:29.544 { 
00:09:29.544 "name": "BaseBdev1", 00:09:29.544 "aliases": [ 00:09:29.544 "db73fc37-4160-438c-a615-43bf2618d58d" 00:09:29.544 ], 00:09:29.544 "product_name": "Malloc disk", 00:09:29.544 "block_size": 512, 00:09:29.544 "num_blocks": 65536, 00:09:29.544 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:29.544 "assigned_rate_limits": { 00:09:29.544 "rw_ios_per_sec": 0, 00:09:29.544 "rw_mbytes_per_sec": 0, 00:09:29.544 "r_mbytes_per_sec": 0, 00:09:29.544 "w_mbytes_per_sec": 0 00:09:29.544 }, 00:09:29.544 "claimed": true, 00:09:29.544 "claim_type": "exclusive_write", 00:09:29.544 "zoned": false, 00:09:29.544 "supported_io_types": { 00:09:29.544 "read": true, 00:09:29.544 "write": true, 00:09:29.544 "unmap": true, 00:09:29.544 "flush": true, 00:09:29.544 "reset": true, 00:09:29.544 "nvme_admin": false, 00:09:29.544 "nvme_io": false, 00:09:29.544 "nvme_io_md": false, 00:09:29.544 "write_zeroes": true, 00:09:29.544 "zcopy": true, 00:09:29.544 "get_zone_info": false, 00:09:29.544 "zone_management": false, 00:09:29.544 "zone_append": false, 00:09:29.544 "compare": false, 00:09:29.544 "compare_and_write": false, 00:09:29.544 "abort": true, 00:09:29.544 "seek_hole": false, 00:09:29.544 "seek_data": false, 00:09:29.544 "copy": true, 00:09:29.544 "nvme_iov_md": false 00:09:29.544 }, 00:09:29.544 "memory_domains": [ 00:09:29.544 { 00:09:29.544 "dma_device_id": "system", 00:09:29.544 "dma_device_type": 1 00:09:29.544 }, 00:09:29.544 { 00:09:29.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.544 "dma_device_type": 2 00:09:29.544 } 00:09:29.544 ], 00:09:29.544 "driver_specific": {} 00:09:29.544 } 00:09:29.544 ] 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
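The `waitforbdev BaseBdev1` sequence traced above (autotest_common.sh@903-911) polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev appears rather than sleeping a fixed interval. A generic sketch of that poll-with-deadline pattern, using a file created by a background job as a hypothetical stand-in for the bdev lookup:

```shell
# Generic poll-until-ready helper in the spirit of waitforbdev; the check
# function and marker file are illustrative, not SPDK APIs.
waitfor() {
  local check=$1 timeout=${2:-5}
  local deadline=$((SECONDS + timeout))
  until "$check"; do
    ((SECONDS < deadline)) || return 1   # give up once the deadline passes
    sleep 0.2
  done
}

marker=$(mktemp -u)
check_marker() { [ -e "$marker" ]; }     # stand-in for bdev_get_bdevs -b <name>
(sleep 0.4; touch "$marker") &           # the "bdev" shows up asynchronously
waitfor check_marker 5 && echo "resource ready"
rm -f "$marker"
```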
00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.544 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.545 "name": "Existed_Raid", 00:09:29.545 "uuid": "11edc169-2c24-4eeb-b3ac-325731add497", 00:09:29.545 "strip_size_kb": 64, 00:09:29.545 "state": "configuring", 00:09:29.545 "raid_level": "concat", 00:09:29.545 "superblock": true, 00:09:29.545 
"num_base_bdevs": 3, 00:09:29.545 "num_base_bdevs_discovered": 1, 00:09:29.545 "num_base_bdevs_operational": 3, 00:09:29.545 "base_bdevs_list": [ 00:09:29.545 { 00:09:29.545 "name": "BaseBdev1", 00:09:29.545 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:29.545 "is_configured": true, 00:09:29.545 "data_offset": 2048, 00:09:29.545 "data_size": 63488 00:09:29.545 }, 00:09:29.545 { 00:09:29.545 "name": "BaseBdev2", 00:09:29.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.545 "is_configured": false, 00:09:29.545 "data_offset": 0, 00:09:29.545 "data_size": 0 00:09:29.545 }, 00:09:29.545 { 00:09:29.545 "name": "BaseBdev3", 00:09:29.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.545 "is_configured": false, 00:09:29.545 "data_offset": 0, 00:09:29.545 "data_size": 0 00:09:29.545 } 00:09:29.545 ] 00:09:29.545 }' 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.545 10:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.113 [2024-11-19 10:03:44.055512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.113 [2024-11-19 10:03:44.055584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.113 
10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.113 [2024-11-19 10:03:44.063566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.113 [2024-11-19 10:03:44.066513] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.113 [2024-11-19 10:03:44.066580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.113 [2024-11-19 10:03:44.066597] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.113 [2024-11-19 10:03:44.066612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.113 "name": "Existed_Raid", 00:09:30.113 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:30.113 "strip_size_kb": 64, 00:09:30.113 "state": "configuring", 00:09:30.113 "raid_level": "concat", 00:09:30.113 "superblock": true, 00:09:30.113 "num_base_bdevs": 3, 00:09:30.113 "num_base_bdevs_discovered": 1, 00:09:30.113 "num_base_bdevs_operational": 3, 00:09:30.113 "base_bdevs_list": [ 00:09:30.113 { 00:09:30.113 "name": "BaseBdev1", 00:09:30.113 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:30.113 "is_configured": true, 00:09:30.113 "data_offset": 2048, 00:09:30.113 "data_size": 63488 00:09:30.113 }, 00:09:30.113 { 00:09:30.113 "name": "BaseBdev2", 00:09:30.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.113 "is_configured": false, 00:09:30.113 "data_offset": 0, 00:09:30.113 "data_size": 0 00:09:30.113 }, 00:09:30.113 { 00:09:30.113 "name": "BaseBdev3", 00:09:30.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.113 "is_configured": false, 00:09:30.113 "data_offset": 0, 00:09:30.113 "data_size": 0 00:09:30.113 } 00:09:30.113 ] 00:09:30.113 }' 00:09:30.113 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.114 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.373 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.373 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.373 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.632 [2024-11-19 10:03:44.651125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.632 BaseBdev2 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.632 [ 00:09:30.632 { 00:09:30.632 "name": "BaseBdev2", 00:09:30.632 "aliases": [ 00:09:30.632 "814533f9-f6fc-4e5a-83dd-07f301bc3dde" 00:09:30.632 ], 00:09:30.632 "product_name": "Malloc disk", 00:09:30.632 "block_size": 512, 00:09:30.632 "num_blocks": 65536, 00:09:30.632 "uuid": "814533f9-f6fc-4e5a-83dd-07f301bc3dde", 00:09:30.632 "assigned_rate_limits": { 00:09:30.632 "rw_ios_per_sec": 0, 00:09:30.632 "rw_mbytes_per_sec": 0, 00:09:30.632 "r_mbytes_per_sec": 0, 00:09:30.632 "w_mbytes_per_sec": 0 00:09:30.632 }, 00:09:30.632 "claimed": true, 00:09:30.632 "claim_type": "exclusive_write", 00:09:30.632 "zoned": false, 00:09:30.632 "supported_io_types": { 00:09:30.632 "read": true, 00:09:30.632 "write": true, 00:09:30.632 "unmap": true, 00:09:30.632 "flush": true, 00:09:30.632 "reset": true, 00:09:30.632 "nvme_admin": false, 00:09:30.632 "nvme_io": false, 00:09:30.632 "nvme_io_md": false, 00:09:30.632 "write_zeroes": true, 00:09:30.632 "zcopy": true, 00:09:30.632 "get_zone_info": false, 00:09:30.632 "zone_management": false, 00:09:30.632 "zone_append": false, 00:09:30.632 "compare": false, 00:09:30.632 "compare_and_write": false, 00:09:30.632 "abort": true, 00:09:30.632 "seek_hole": false, 00:09:30.632 "seek_data": false, 00:09:30.632 "copy": true, 00:09:30.632 "nvme_iov_md": false 00:09:30.632 }, 00:09:30.632 "memory_domains": [ 00:09:30.632 { 00:09:30.632 "dma_device_id": "system", 00:09:30.632 "dma_device_type": 1 00:09:30.632 }, 00:09:30.632 { 00:09:30.632 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.632 "dma_device_type": 2 00:09:30.632 } 00:09:30.632 ], 00:09:30.632 "driver_specific": {} 00:09:30.632 } 00:09:30.632 ] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.632 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.632 "name": "Existed_Raid", 00:09:30.632 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:30.632 "strip_size_kb": 64, 00:09:30.632 "state": "configuring", 00:09:30.632 "raid_level": "concat", 00:09:30.632 "superblock": true, 00:09:30.632 "num_base_bdevs": 3, 00:09:30.632 "num_base_bdevs_discovered": 2, 00:09:30.632 "num_base_bdevs_operational": 3, 00:09:30.632 "base_bdevs_list": [ 00:09:30.632 { 00:09:30.632 "name": "BaseBdev1", 00:09:30.632 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:30.632 "is_configured": true, 00:09:30.632 "data_offset": 2048, 00:09:30.632 "data_size": 63488 00:09:30.632 }, 00:09:30.633 { 00:09:30.633 "name": "BaseBdev2", 00:09:30.633 "uuid": "814533f9-f6fc-4e5a-83dd-07f301bc3dde", 00:09:30.633 "is_configured": true, 00:09:30.633 "data_offset": 2048, 00:09:30.633 "data_size": 63488 00:09:30.633 }, 00:09:30.633 { 00:09:30.633 "name": "BaseBdev3", 00:09:30.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.633 "is_configured": false, 00:09:30.633 "data_offset": 0, 00:09:30.633 "data_size": 0 00:09:30.633 } 00:09:30.633 ] 00:09:30.633 }' 00:09:30.633 10:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.633 10:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.200 10:03:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 [2024-11-19 10:03:45.256280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.200 [2024-11-19 10:03:45.256927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.200 [2024-11-19 10:03:45.256965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.200 BaseBdev3 00:09:31.200 [2024-11-19 10:03:45.257322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:31.200 [2024-11-19 10:03:45.257541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.200 [2024-11-19 10:03:45.257559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:31.200 [2024-11-19 10:03:45.257745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 [ 00:09:31.200 { 00:09:31.200 "name": "BaseBdev3", 00:09:31.200 "aliases": [ 00:09:31.200 "a918753e-b65e-4864-9840-82701f41de94" 00:09:31.200 ], 00:09:31.200 "product_name": "Malloc disk", 00:09:31.200 "block_size": 512, 00:09:31.200 "num_blocks": 65536, 00:09:31.200 "uuid": "a918753e-b65e-4864-9840-82701f41de94", 00:09:31.200 "assigned_rate_limits": { 00:09:31.200 "rw_ios_per_sec": 0, 00:09:31.200 "rw_mbytes_per_sec": 0, 00:09:31.200 "r_mbytes_per_sec": 0, 00:09:31.200 "w_mbytes_per_sec": 0 00:09:31.200 }, 00:09:31.200 "claimed": true, 00:09:31.200 "claim_type": "exclusive_write", 00:09:31.200 "zoned": false, 00:09:31.200 "supported_io_types": { 00:09:31.200 "read": true, 00:09:31.200 "write": true, 00:09:31.200 "unmap": true, 00:09:31.200 "flush": true, 00:09:31.200 "reset": true, 00:09:31.200 "nvme_admin": false, 00:09:31.200 "nvme_io": false, 00:09:31.200 "nvme_io_md": false, 00:09:31.200 "write_zeroes": true, 00:09:31.200 "zcopy": true, 00:09:31.200 "get_zone_info": false, 00:09:31.200 "zone_management": false, 00:09:31.200 "zone_append": false, 00:09:31.200 "compare": false, 00:09:31.200 "compare_and_write": false, 00:09:31.200 "abort": true, 00:09:31.200 "seek_hole": false, 00:09:31.200 "seek_data": false, 
00:09:31.200 "copy": true, 00:09:31.200 "nvme_iov_md": false 00:09:31.200 }, 00:09:31.200 "memory_domains": [ 00:09:31.200 { 00:09:31.200 "dma_device_id": "system", 00:09:31.200 "dma_device_type": 1 00:09:31.200 }, 00:09:31.200 { 00:09:31.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.200 "dma_device_type": 2 00:09:31.200 } 00:09:31.200 ], 00:09:31.200 "driver_specific": {} 00:09:31.200 } 00:09:31.200 ] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.200 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.200 "name": "Existed_Raid", 00:09:31.200 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:31.200 "strip_size_kb": 64, 00:09:31.200 "state": "online", 00:09:31.200 "raid_level": "concat", 00:09:31.200 "superblock": true, 00:09:31.200 "num_base_bdevs": 3, 00:09:31.200 "num_base_bdevs_discovered": 3, 00:09:31.200 "num_base_bdevs_operational": 3, 00:09:31.200 "base_bdevs_list": [ 00:09:31.200 { 00:09:31.200 "name": "BaseBdev1", 00:09:31.200 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:31.200 "is_configured": true, 00:09:31.200 "data_offset": 2048, 00:09:31.200 "data_size": 63488 00:09:31.200 }, 00:09:31.200 { 00:09:31.200 "name": "BaseBdev2", 00:09:31.201 "uuid": "814533f9-f6fc-4e5a-83dd-07f301bc3dde", 00:09:31.201 "is_configured": true, 00:09:31.201 "data_offset": 2048, 00:09:31.201 "data_size": 63488 00:09:31.201 }, 00:09:31.201 { 00:09:31.201 "name": "BaseBdev3", 00:09:31.201 "uuid": "a918753e-b65e-4864-9840-82701f41de94", 00:09:31.201 "is_configured": true, 00:09:31.201 "data_offset": 2048, 00:09:31.201 "data_size": 63488 00:09:31.201 } 00:09:31.201 ] 00:09:31.201 }' 00:09:31.201 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.201 10:03:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.768 [2024-11-19 10:03:45.804886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.768 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.768 "name": "Existed_Raid", 00:09:31.768 "aliases": [ 00:09:31.768 "ae7d3112-25a8-4118-bf05-5ee56eadb1e7" 00:09:31.768 ], 00:09:31.768 "product_name": "Raid Volume", 00:09:31.768 "block_size": 512, 00:09:31.768 "num_blocks": 190464, 00:09:31.768 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:31.768 "assigned_rate_limits": { 00:09:31.768 "rw_ios_per_sec": 0, 00:09:31.768 "rw_mbytes_per_sec": 0, 00:09:31.768 
"r_mbytes_per_sec": 0, 00:09:31.768 "w_mbytes_per_sec": 0 00:09:31.768 }, 00:09:31.768 "claimed": false, 00:09:31.768 "zoned": false, 00:09:31.768 "supported_io_types": { 00:09:31.768 "read": true, 00:09:31.768 "write": true, 00:09:31.768 "unmap": true, 00:09:31.768 "flush": true, 00:09:31.768 "reset": true, 00:09:31.768 "nvme_admin": false, 00:09:31.768 "nvme_io": false, 00:09:31.768 "nvme_io_md": false, 00:09:31.768 "write_zeroes": true, 00:09:31.768 "zcopy": false, 00:09:31.768 "get_zone_info": false, 00:09:31.768 "zone_management": false, 00:09:31.768 "zone_append": false, 00:09:31.768 "compare": false, 00:09:31.768 "compare_and_write": false, 00:09:31.768 "abort": false, 00:09:31.768 "seek_hole": false, 00:09:31.768 "seek_data": false, 00:09:31.769 "copy": false, 00:09:31.769 "nvme_iov_md": false 00:09:31.769 }, 00:09:31.769 "memory_domains": [ 00:09:31.769 { 00:09:31.769 "dma_device_id": "system", 00:09:31.769 "dma_device_type": 1 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.769 "dma_device_type": 2 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "dma_device_id": "system", 00:09:31.769 "dma_device_type": 1 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.769 "dma_device_type": 2 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "dma_device_id": "system", 00:09:31.769 "dma_device_type": 1 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.769 "dma_device_type": 2 00:09:31.769 } 00:09:31.769 ], 00:09:31.769 "driver_specific": { 00:09:31.769 "raid": { 00:09:31.769 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:31.769 "strip_size_kb": 64, 00:09:31.769 "state": "online", 00:09:31.769 "raid_level": "concat", 00:09:31.769 "superblock": true, 00:09:31.769 "num_base_bdevs": 3, 00:09:31.769 "num_base_bdevs_discovered": 3, 00:09:31.769 "num_base_bdevs_operational": 3, 00:09:31.769 "base_bdevs_list": [ 00:09:31.769 { 00:09:31.769 
"name": "BaseBdev1", 00:09:31.769 "uuid": "db73fc37-4160-438c-a615-43bf2618d58d", 00:09:31.769 "is_configured": true, 00:09:31.769 "data_offset": 2048, 00:09:31.769 "data_size": 63488 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "name": "BaseBdev2", 00:09:31.769 "uuid": "814533f9-f6fc-4e5a-83dd-07f301bc3dde", 00:09:31.769 "is_configured": true, 00:09:31.769 "data_offset": 2048, 00:09:31.769 "data_size": 63488 00:09:31.769 }, 00:09:31.769 { 00:09:31.769 "name": "BaseBdev3", 00:09:31.769 "uuid": "a918753e-b65e-4864-9840-82701f41de94", 00:09:31.769 "is_configured": true, 00:09:31.769 "data_offset": 2048, 00:09:31.769 "data_size": 63488 00:09:31.769 } 00:09:31.769 ] 00:09:31.769 } 00:09:31.769 } 00:09:31.769 }' 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.769 BaseBdev2 00:09:31.769 BaseBdev3' 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.769 10:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.769 10:03:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 [2024-11-19 10:03:46.128639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.029 [2024-11-19 10:03:46.128683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.029 [2024-11-19 10:03:46.128758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.029 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.288 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.288 "name": "Existed_Raid", 00:09:32.288 "uuid": "ae7d3112-25a8-4118-bf05-5ee56eadb1e7", 00:09:32.288 "strip_size_kb": 64, 00:09:32.288 "state": "offline", 00:09:32.288 "raid_level": "concat", 00:09:32.288 "superblock": true, 00:09:32.288 "num_base_bdevs": 3, 00:09:32.288 "num_base_bdevs_discovered": 2, 00:09:32.288 "num_base_bdevs_operational": 2, 00:09:32.288 "base_bdevs_list": [ 00:09:32.288 { 00:09:32.288 "name": null, 00:09:32.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:32.288 "is_configured": false, 00:09:32.288 "data_offset": 0, 00:09:32.288 "data_size": 63488 00:09:32.288 }, 00:09:32.288 { 00:09:32.288 "name": "BaseBdev2", 00:09:32.288 "uuid": "814533f9-f6fc-4e5a-83dd-07f301bc3dde", 00:09:32.288 "is_configured": true, 00:09:32.288 "data_offset": 2048, 00:09:32.288 "data_size": 63488 00:09:32.288 }, 00:09:32.288 { 00:09:32.288 "name": "BaseBdev3", 00:09:32.288 "uuid": "a918753e-b65e-4864-9840-82701f41de94", 00:09:32.288 "is_configured": true, 00:09:32.288 "data_offset": 2048, 00:09:32.288 "data_size": 63488 00:09:32.288 } 00:09:32.288 ] 00:09:32.288 }' 00:09:32.288 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.288 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.548 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.862 [2024-11-19 10:03:46.792498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.862 10:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.862 [2024-11-19 10:03:46.945357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.862 [2024-11-19 10:03:46.945423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.862 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.136 BaseBdev2 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.136 
10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.136 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.136 [ 00:09:33.136 { 00:09:33.136 "name": "BaseBdev2", 00:09:33.136 "aliases": [ 00:09:33.136 "d05aae92-9889-4867-8ee4-f9624664f00b" 00:09:33.136 ], 00:09:33.136 "product_name": "Malloc disk", 00:09:33.137 "block_size": 512, 00:09:33.137 "num_blocks": 65536, 00:09:33.137 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:33.137 "assigned_rate_limits": { 00:09:33.137 "rw_ios_per_sec": 0, 00:09:33.137 "rw_mbytes_per_sec": 0, 00:09:33.137 "r_mbytes_per_sec": 0, 00:09:33.137 "w_mbytes_per_sec": 0 
00:09:33.137 }, 00:09:33.137 "claimed": false, 00:09:33.137 "zoned": false, 00:09:33.137 "supported_io_types": { 00:09:33.137 "read": true, 00:09:33.137 "write": true, 00:09:33.137 "unmap": true, 00:09:33.137 "flush": true, 00:09:33.137 "reset": true, 00:09:33.137 "nvme_admin": false, 00:09:33.137 "nvme_io": false, 00:09:33.137 "nvme_io_md": false, 00:09:33.137 "write_zeroes": true, 00:09:33.137 "zcopy": true, 00:09:33.137 "get_zone_info": false, 00:09:33.137 "zone_management": false, 00:09:33.137 "zone_append": false, 00:09:33.137 "compare": false, 00:09:33.137 "compare_and_write": false, 00:09:33.137 "abort": true, 00:09:33.137 "seek_hole": false, 00:09:33.137 "seek_data": false, 00:09:33.137 "copy": true, 00:09:33.137 "nvme_iov_md": false 00:09:33.137 }, 00:09:33.137 "memory_domains": [ 00:09:33.137 { 00:09:33.137 "dma_device_id": "system", 00:09:33.137 "dma_device_type": 1 00:09:33.137 }, 00:09:33.137 { 00:09:33.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.137 "dma_device_type": 2 00:09:33.137 } 00:09:33.137 ], 00:09:33.137 "driver_specific": {} 00:09:33.137 } 00:09:33.137 ] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.137 BaseBdev3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.137 [ 00:09:33.137 { 00:09:33.137 "name": "BaseBdev3", 00:09:33.137 "aliases": [ 00:09:33.137 "5178ebcf-a484-4599-9fc0-49cd92367c72" 00:09:33.137 ], 00:09:33.137 "product_name": "Malloc disk", 00:09:33.137 "block_size": 512, 00:09:33.137 "num_blocks": 65536, 00:09:33.137 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:33.137 "assigned_rate_limits": { 00:09:33.137 "rw_ios_per_sec": 0, 00:09:33.137 "rw_mbytes_per_sec": 0, 
00:09:33.137 "r_mbytes_per_sec": 0, 00:09:33.137 "w_mbytes_per_sec": 0 00:09:33.137 }, 00:09:33.137 "claimed": false, 00:09:33.137 "zoned": false, 00:09:33.137 "supported_io_types": { 00:09:33.137 "read": true, 00:09:33.137 "write": true, 00:09:33.137 "unmap": true, 00:09:33.137 "flush": true, 00:09:33.137 "reset": true, 00:09:33.137 "nvme_admin": false, 00:09:33.137 "nvme_io": false, 00:09:33.137 "nvme_io_md": false, 00:09:33.137 "write_zeroes": true, 00:09:33.137 "zcopy": true, 00:09:33.137 "get_zone_info": false, 00:09:33.137 "zone_management": false, 00:09:33.137 "zone_append": false, 00:09:33.137 "compare": false, 00:09:33.137 "compare_and_write": false, 00:09:33.137 "abort": true, 00:09:33.137 "seek_hole": false, 00:09:33.137 "seek_data": false, 00:09:33.137 "copy": true, 00:09:33.137 "nvme_iov_md": false 00:09:33.137 }, 00:09:33.137 "memory_domains": [ 00:09:33.137 { 00:09:33.137 "dma_device_id": "system", 00:09:33.137 "dma_device_type": 1 00:09:33.137 }, 00:09:33.137 { 00:09:33.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.137 "dma_device_type": 2 00:09:33.137 } 00:09:33.137 ], 00:09:33.137 "driver_specific": {} 00:09:33.137 } 00:09:33.137 ] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.137 [2024-11-19 10:03:47.253424] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.137 [2024-11-19 10:03:47.253484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.137 [2024-11-19 10:03:47.253536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.137 [2024-11-19 10:03:47.256233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.137 10:03:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.137 "name": "Existed_Raid", 00:09:33.137 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:33.137 "strip_size_kb": 64, 00:09:33.137 "state": "configuring", 00:09:33.137 "raid_level": "concat", 00:09:33.137 "superblock": true, 00:09:33.137 "num_base_bdevs": 3, 00:09:33.137 "num_base_bdevs_discovered": 2, 00:09:33.137 "num_base_bdevs_operational": 3, 00:09:33.137 "base_bdevs_list": [ 00:09:33.137 { 00:09:33.137 "name": "BaseBdev1", 00:09:33.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.137 "is_configured": false, 00:09:33.137 "data_offset": 0, 00:09:33.137 "data_size": 0 00:09:33.137 }, 00:09:33.137 { 00:09:33.137 "name": "BaseBdev2", 00:09:33.137 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:33.137 "is_configured": true, 00:09:33.137 "data_offset": 2048, 00:09:33.137 "data_size": 63488 00:09:33.137 }, 00:09:33.137 { 00:09:33.137 "name": "BaseBdev3", 00:09:33.137 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:33.137 "is_configured": true, 00:09:33.137 "data_offset": 2048, 00:09:33.137 "data_size": 63488 00:09:33.137 } 00:09:33.137 ] 00:09:33.137 }' 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.137 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.706 [2024-11-19 10:03:47.773583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.706 "name": "Existed_Raid", 00:09:33.706 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:33.706 "strip_size_kb": 64, 00:09:33.706 "state": "configuring", 00:09:33.706 "raid_level": "concat", 00:09:33.706 "superblock": true, 00:09:33.706 "num_base_bdevs": 3, 00:09:33.706 "num_base_bdevs_discovered": 1, 00:09:33.706 "num_base_bdevs_operational": 3, 00:09:33.706 "base_bdevs_list": [ 00:09:33.706 { 00:09:33.706 "name": "BaseBdev1", 00:09:33.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.706 "is_configured": false, 00:09:33.706 "data_offset": 0, 00:09:33.706 "data_size": 0 00:09:33.706 }, 00:09:33.706 { 00:09:33.706 "name": null, 00:09:33.706 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:33.706 "is_configured": false, 00:09:33.706 "data_offset": 0, 00:09:33.706 "data_size": 63488 00:09:33.706 }, 00:09:33.706 { 00:09:33.706 "name": "BaseBdev3", 00:09:33.706 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:33.706 "is_configured": true, 00:09:33.706 "data_offset": 2048, 00:09:33.706 "data_size": 63488 00:09:33.706 } 00:09:33.706 ] 00:09:33.706 }' 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.706 10:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.274 [2024-11-19 10:03:48.363023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.274 BaseBdev1 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.274 10:03:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.274 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.274 [ 00:09:34.274 { 00:09:34.274 "name": "BaseBdev1", 00:09:34.274 "aliases": [ 00:09:34.274 "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69" 00:09:34.274 ], 00:09:34.274 "product_name": "Malloc disk", 00:09:34.274 "block_size": 512, 00:09:34.274 "num_blocks": 65536, 00:09:34.274 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:34.274 "assigned_rate_limits": { 00:09:34.274 "rw_ios_per_sec": 0, 00:09:34.274 "rw_mbytes_per_sec": 0, 00:09:34.274 "r_mbytes_per_sec": 0, 00:09:34.274 "w_mbytes_per_sec": 0 00:09:34.274 }, 00:09:34.274 "claimed": true, 00:09:34.274 "claim_type": "exclusive_write", 00:09:34.274 "zoned": false, 00:09:34.274 "supported_io_types": { 00:09:34.274 "read": true, 00:09:34.274 "write": true, 00:09:34.274 "unmap": true, 00:09:34.274 "flush": true, 00:09:34.274 "reset": true, 00:09:34.274 "nvme_admin": false, 00:09:34.274 "nvme_io": false, 00:09:34.274 "nvme_io_md": false, 00:09:34.274 "write_zeroes": true, 00:09:34.274 "zcopy": true, 00:09:34.274 "get_zone_info": false, 00:09:34.274 "zone_management": false, 00:09:34.274 "zone_append": false, 00:09:34.274 "compare": false, 00:09:34.274 "compare_and_write": false, 00:09:34.274 "abort": true, 00:09:34.274 "seek_hole": false, 00:09:34.274 "seek_data": false, 00:09:34.274 "copy": true, 00:09:34.274 "nvme_iov_md": false 00:09:34.274 }, 00:09:34.274 "memory_domains": [ 00:09:34.275 { 00:09:34.275 "dma_device_id": "system", 00:09:34.275 "dma_device_type": 1 00:09:34.275 }, 00:09:34.275 { 00:09:34.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.275 
"dma_device_type": 2 00:09:34.275 } 00:09:34.275 ], 00:09:34.275 "driver_specific": {} 00:09:34.275 } 00:09:34.275 ] 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.275 "name": "Existed_Raid", 00:09:34.275 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:34.275 "strip_size_kb": 64, 00:09:34.275 "state": "configuring", 00:09:34.275 "raid_level": "concat", 00:09:34.275 "superblock": true, 00:09:34.275 "num_base_bdevs": 3, 00:09:34.275 "num_base_bdevs_discovered": 2, 00:09:34.275 "num_base_bdevs_operational": 3, 00:09:34.275 "base_bdevs_list": [ 00:09:34.275 { 00:09:34.275 "name": "BaseBdev1", 00:09:34.275 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:34.275 "is_configured": true, 00:09:34.275 "data_offset": 2048, 00:09:34.275 "data_size": 63488 00:09:34.275 }, 00:09:34.275 { 00:09:34.275 "name": null, 00:09:34.275 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:34.275 "is_configured": false, 00:09:34.275 "data_offset": 0, 00:09:34.275 "data_size": 63488 00:09:34.275 }, 00:09:34.275 { 00:09:34.275 "name": "BaseBdev3", 00:09:34.275 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:34.275 "is_configured": true, 00:09:34.275 "data_offset": 2048, 00:09:34.275 "data_size": 63488 00:09:34.275 } 00:09:34.275 ] 00:09:34.275 }' 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.275 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.843 [2024-11-19 10:03:48.967222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.843 
10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.843 10:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.843 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.843 "name": "Existed_Raid", 00:09:34.843 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:34.843 "strip_size_kb": 64, 00:09:34.843 "state": "configuring", 00:09:34.843 "raid_level": "concat", 00:09:34.843 "superblock": true, 00:09:34.843 "num_base_bdevs": 3, 00:09:34.843 "num_base_bdevs_discovered": 1, 00:09:34.843 "num_base_bdevs_operational": 3, 00:09:34.843 "base_bdevs_list": [ 00:09:34.843 { 00:09:34.843 "name": "BaseBdev1", 00:09:34.843 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:34.843 "is_configured": true, 00:09:34.843 "data_offset": 2048, 00:09:34.843 "data_size": 63488 00:09:34.843 }, 00:09:34.843 { 00:09:34.843 "name": null, 00:09:34.843 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:34.843 "is_configured": false, 00:09:34.843 "data_offset": 0, 00:09:34.843 "data_size": 63488 00:09:34.843 }, 00:09:34.843 { 00:09:34.843 "name": null, 00:09:34.843 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:34.844 "is_configured": false, 00:09:34.844 "data_offset": 0, 00:09:34.844 "data_size": 63488 00:09:34.844 } 00:09:34.844 ] 00:09:34.844 }' 00:09:34.844 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.844 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 
10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 [2024-11-19 10:03:49.535445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.415 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.415 "name": "Existed_Raid", 00:09:35.415 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:35.415 "strip_size_kb": 64, 00:09:35.415 "state": "configuring", 00:09:35.415 "raid_level": "concat", 00:09:35.415 "superblock": true, 00:09:35.415 "num_base_bdevs": 3, 00:09:35.415 "num_base_bdevs_discovered": 2, 00:09:35.415 "num_base_bdevs_operational": 3, 00:09:35.415 "base_bdevs_list": [ 00:09:35.415 { 00:09:35.415 "name": "BaseBdev1", 00:09:35.415 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:35.415 "is_configured": true, 00:09:35.415 "data_offset": 2048, 00:09:35.415 "data_size": 63488 00:09:35.415 }, 00:09:35.415 { 00:09:35.415 "name": null, 00:09:35.415 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:35.415 "is_configured": false, 00:09:35.415 "data_offset": 0, 00:09:35.415 "data_size": 
63488 00:09:35.415 }, 00:09:35.415 { 00:09:35.415 "name": "BaseBdev3", 00:09:35.415 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:35.415 "is_configured": true, 00:09:35.416 "data_offset": 2048, 00:09:35.416 "data_size": 63488 00:09:35.416 } 00:09:35.416 ] 00:09:35.416 }' 00:09:35.416 10:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.416 10:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.984 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.984 [2024-11-19 10:03:50.135649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.243 "name": "Existed_Raid", 00:09:36.243 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:36.243 "strip_size_kb": 64, 00:09:36.243 "state": "configuring", 00:09:36.243 "raid_level": "concat", 00:09:36.243 "superblock": true, 00:09:36.243 "num_base_bdevs": 3, 00:09:36.243 "num_base_bdevs_discovered": 1, 00:09:36.243 "num_base_bdevs_operational": 
3, 00:09:36.243 "base_bdevs_list": [ 00:09:36.243 { 00:09:36.243 "name": null, 00:09:36.243 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:36.243 "is_configured": false, 00:09:36.243 "data_offset": 0, 00:09:36.243 "data_size": 63488 00:09:36.243 }, 00:09:36.243 { 00:09:36.243 "name": null, 00:09:36.243 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:36.243 "is_configured": false, 00:09:36.243 "data_offset": 0, 00:09:36.243 "data_size": 63488 00:09:36.243 }, 00:09:36.243 { 00:09:36.243 "name": "BaseBdev3", 00:09:36.243 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:36.243 "is_configured": true, 00:09:36.243 "data_offset": 2048, 00:09:36.243 "data_size": 63488 00:09:36.243 } 00:09:36.243 ] 00:09:36.243 }' 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.243 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.811 [2024-11-19 10:03:50.795859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.811 "name": "Existed_Raid", 00:09:36.811 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:36.811 "strip_size_kb": 64, 00:09:36.811 "state": "configuring", 00:09:36.811 "raid_level": "concat", 00:09:36.811 "superblock": true, 00:09:36.811 "num_base_bdevs": 3, 00:09:36.811 "num_base_bdevs_discovered": 2, 00:09:36.811 "num_base_bdevs_operational": 3, 00:09:36.811 "base_bdevs_list": [ 00:09:36.811 { 00:09:36.811 "name": null, 00:09:36.811 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:36.811 "is_configured": false, 00:09:36.811 "data_offset": 0, 00:09:36.811 "data_size": 63488 00:09:36.811 }, 00:09:36.811 { 00:09:36.811 "name": "BaseBdev2", 00:09:36.811 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:36.811 "is_configured": true, 00:09:36.811 "data_offset": 2048, 00:09:36.811 "data_size": 63488 00:09:36.811 }, 00:09:36.811 { 00:09:36.811 "name": "BaseBdev3", 00:09:36.811 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:36.811 "is_configured": true, 00:09:36.811 "data_offset": 2048, 00:09:36.811 "data_size": 63488 00:09:36.811 } 00:09:36.811 ] 00:09:36.811 }' 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.811 10:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc35bf6f-8c96-4de0-a4a9-c1eb2430df69 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 [2024-11-19 10:03:51.486313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:37.383 [2024-11-19 10:03:51.486613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:37.383 [2024-11-19 10:03:51.486638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:37.383 [2024-11-19 10:03:51.486969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:37.383 NewBaseBdev 00:09:37.383 [2024-11-19 10:03:51.487173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:37.383 [2024-11-19 10:03:51.487189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:37.383 [2024-11-19 10:03:51.487360] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.383 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 [ 00:09:37.383 { 00:09:37.383 "name": "NewBaseBdev", 00:09:37.383 "aliases": [ 00:09:37.383 "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69" 00:09:37.383 ], 00:09:37.383 "product_name": "Malloc disk", 00:09:37.383 "block_size": 512, 00:09:37.383 "num_blocks": 65536, 00:09:37.383 "uuid": 
"cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:37.383 "assigned_rate_limits": { 00:09:37.384 "rw_ios_per_sec": 0, 00:09:37.384 "rw_mbytes_per_sec": 0, 00:09:37.384 "r_mbytes_per_sec": 0, 00:09:37.384 "w_mbytes_per_sec": 0 00:09:37.384 }, 00:09:37.384 "claimed": true, 00:09:37.384 "claim_type": "exclusive_write", 00:09:37.384 "zoned": false, 00:09:37.384 "supported_io_types": { 00:09:37.384 "read": true, 00:09:37.384 "write": true, 00:09:37.384 "unmap": true, 00:09:37.384 "flush": true, 00:09:37.384 "reset": true, 00:09:37.384 "nvme_admin": false, 00:09:37.384 "nvme_io": false, 00:09:37.384 "nvme_io_md": false, 00:09:37.384 "write_zeroes": true, 00:09:37.384 "zcopy": true, 00:09:37.384 "get_zone_info": false, 00:09:37.384 "zone_management": false, 00:09:37.384 "zone_append": false, 00:09:37.384 "compare": false, 00:09:37.384 "compare_and_write": false, 00:09:37.384 "abort": true, 00:09:37.384 "seek_hole": false, 00:09:37.384 "seek_data": false, 00:09:37.384 "copy": true, 00:09:37.384 "nvme_iov_md": false 00:09:37.384 }, 00:09:37.384 "memory_domains": [ 00:09:37.384 { 00:09:37.384 "dma_device_id": "system", 00:09:37.384 "dma_device_type": 1 00:09:37.384 }, 00:09:37.384 { 00:09:37.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.384 "dma_device_type": 2 00:09:37.384 } 00:09:37.384 ], 00:09:37.384 "driver_specific": {} 00:09:37.384 } 00:09:37.384 ] 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.384 10:03:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.384 "name": "Existed_Raid", 00:09:37.384 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:37.384 "strip_size_kb": 64, 00:09:37.384 "state": "online", 00:09:37.384 "raid_level": "concat", 00:09:37.384 "superblock": true, 00:09:37.384 "num_base_bdevs": 3, 00:09:37.384 "num_base_bdevs_discovered": 3, 00:09:37.384 "num_base_bdevs_operational": 3, 00:09:37.384 "base_bdevs_list": [ 00:09:37.384 { 00:09:37.384 "name": "NewBaseBdev", 00:09:37.384 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:37.384 "is_configured": 
true, 00:09:37.384 "data_offset": 2048, 00:09:37.384 "data_size": 63488 00:09:37.384 }, 00:09:37.384 { 00:09:37.384 "name": "BaseBdev2", 00:09:37.384 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:37.384 "is_configured": true, 00:09:37.384 "data_offset": 2048, 00:09:37.384 "data_size": 63488 00:09:37.384 }, 00:09:37.384 { 00:09:37.384 "name": "BaseBdev3", 00:09:37.384 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:37.384 "is_configured": true, 00:09:37.384 "data_offset": 2048, 00:09:37.384 "data_size": 63488 00:09:37.384 } 00:09:37.384 ] 00:09:37.384 }' 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.384 10:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.951 [2024-11-19 10:03:52.054978] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.951 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.951 "name": "Existed_Raid", 00:09:37.951 "aliases": [ 00:09:37.951 "56032b88-a142-4996-a6b7-f99985956263" 00:09:37.951 ], 00:09:37.951 "product_name": "Raid Volume", 00:09:37.951 "block_size": 512, 00:09:37.951 "num_blocks": 190464, 00:09:37.951 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:37.951 "assigned_rate_limits": { 00:09:37.951 "rw_ios_per_sec": 0, 00:09:37.951 "rw_mbytes_per_sec": 0, 00:09:37.951 "r_mbytes_per_sec": 0, 00:09:37.951 "w_mbytes_per_sec": 0 00:09:37.951 }, 00:09:37.951 "claimed": false, 00:09:37.951 "zoned": false, 00:09:37.951 "supported_io_types": { 00:09:37.951 "read": true, 00:09:37.951 "write": true, 00:09:37.951 "unmap": true, 00:09:37.951 "flush": true, 00:09:37.951 "reset": true, 00:09:37.951 "nvme_admin": false, 00:09:37.951 "nvme_io": false, 00:09:37.951 "nvme_io_md": false, 00:09:37.951 "write_zeroes": true, 00:09:37.951 "zcopy": false, 00:09:37.951 "get_zone_info": false, 00:09:37.951 "zone_management": false, 00:09:37.951 "zone_append": false, 00:09:37.951 "compare": false, 00:09:37.951 "compare_and_write": false, 00:09:37.951 "abort": false, 00:09:37.951 "seek_hole": false, 00:09:37.951 "seek_data": false, 00:09:37.951 "copy": false, 00:09:37.951 "nvme_iov_md": false 00:09:37.951 }, 00:09:37.951 "memory_domains": [ 00:09:37.951 { 00:09:37.951 "dma_device_id": "system", 00:09:37.951 "dma_device_type": 1 00:09:37.951 }, 00:09:37.951 { 00:09:37.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.951 "dma_device_type": 2 00:09:37.951 }, 00:09:37.951 { 00:09:37.951 "dma_device_id": "system", 00:09:37.951 "dma_device_type": 1 00:09:37.951 }, 00:09:37.951 { 00:09:37.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.951 
"dma_device_type": 2 00:09:37.951 }, 00:09:37.951 { 00:09:37.951 "dma_device_id": "system", 00:09:37.951 "dma_device_type": 1 00:09:37.951 }, 00:09:37.951 { 00:09:37.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.951 "dma_device_type": 2 00:09:37.951 } 00:09:37.951 ], 00:09:37.951 "driver_specific": { 00:09:37.951 "raid": { 00:09:37.951 "uuid": "56032b88-a142-4996-a6b7-f99985956263", 00:09:37.951 "strip_size_kb": 64, 00:09:37.951 "state": "online", 00:09:37.951 "raid_level": "concat", 00:09:37.951 "superblock": true, 00:09:37.951 "num_base_bdevs": 3, 00:09:37.951 "num_base_bdevs_discovered": 3, 00:09:37.951 "num_base_bdevs_operational": 3, 00:09:37.951 "base_bdevs_list": [ 00:09:37.951 { 00:09:37.951 "name": "NewBaseBdev", 00:09:37.952 "uuid": "cc35bf6f-8c96-4de0-a4a9-c1eb2430df69", 00:09:37.952 "is_configured": true, 00:09:37.952 "data_offset": 2048, 00:09:37.952 "data_size": 63488 00:09:37.952 }, 00:09:37.952 { 00:09:37.952 "name": "BaseBdev2", 00:09:37.952 "uuid": "d05aae92-9889-4867-8ee4-f9624664f00b", 00:09:37.952 "is_configured": true, 00:09:37.952 "data_offset": 2048, 00:09:37.952 "data_size": 63488 00:09:37.952 }, 00:09:37.952 { 00:09:37.952 "name": "BaseBdev3", 00:09:37.952 "uuid": "5178ebcf-a484-4599-9fc0-49cd92367c72", 00:09:37.952 "is_configured": true, 00:09:37.952 "data_offset": 2048, 00:09:37.952 "data_size": 63488 00:09:37.952 } 00:09:37.952 ] 00:09:37.952 } 00:09:37.952 } 00:09:37.952 }' 00:09:37.952 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.952 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.952 BaseBdev2 00:09:37.952 BaseBdev3' 00:09:37.952 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.211 
10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 [2024-11-19 10:03:52.398667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.211 [2024-11-19 10:03:52.398706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.211 [2024-11-19 10:03:52.399004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.211 [2024-11-19 10:03:52.399201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.211 [2024-11-19 10:03:52.399236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:38.211 10:03:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66127 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66127 ']' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66127 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66127 00:09:38.211 killing process with pid 66127 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66127' 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66127 00:09:38.211 [2024-11-19 10:03:52.438613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.211 10:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66127 00:09:38.785 [2024-11-19 10:03:52.733979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.733 10:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:39.733 00:09:39.733 real 0m12.206s 00:09:39.733 user 0m20.082s 00:09:39.733 sys 0m1.698s 00:09:39.733 10:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.733 10:03:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.733 ************************************ 00:09:39.733 END TEST raid_state_function_test_sb 00:09:39.733 ************************************ 00:09:39.992 10:03:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:39.992 10:03:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.992 10:03:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.992 10:03:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.992 ************************************ 00:09:39.992 START TEST raid_superblock_test 00:09:39.992 ************************************ 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66764 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66764 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66764 ']' 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.992 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.993 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.993 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.993 10:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.993 [2024-11-19 10:03:54.089589] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:39.993 [2024-11-19 10:03:54.090071] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66764 ] 00:09:40.251 [2024-11-19 10:03:54.266976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.251 [2024-11-19 10:03:54.417735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.509 [2024-11-19 10:03:54.650833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.509 [2024-11-19 10:03:54.650923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:41.077 
10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.077 malloc1
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.077 [2024-11-19 10:03:55.129998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:41.077 [2024-11-19 10:03:55.130084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:41.077 [2024-11-19 10:03:55.130118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:41.077 [2024-11-19 10:03:55.130139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:41.077 [2024-11-19 10:03:55.133362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:41.077 [2024-11-19 10:03:55.133408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:41.077 pt1
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.077 malloc2
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.077 [2024-11-19 10:03:55.191876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:41.077 [2024-11-19 10:03:55.192110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:41.077 [2024-11-19 10:03:55.192154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:41.077 [2024-11-19 10:03:55.192170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:41.077 [2024-11-19 10:03:55.195459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:41.077 [2024-11-19 10:03:55.195622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:41.077 pt2
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.077 malloc3
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:41.077 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.078 [2024-11-19 10:03:55.262505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:41.078 [2024-11-19 10:03:55.262591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:41.078 [2024-11-19 10:03:55.262624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:41.078 [2024-11-19 10:03:55.262640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:41.078 [2024-11-19 10:03:55.265743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:41.078 [2024-11-19 10:03:55.265800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:41.078 pt3
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.078 [2024-11-19 10:03:55.270669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:41.078 [2024-11-19 10:03:55.273429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:41.078 [2024-11-19 10:03:55.273538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:41.078 [2024-11-19 10:03:55.273760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:41.078 [2024-11-19 10:03:55.273837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:41.078 [2024-11-19 10:03:55.274221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:41.078 [2024-11-19 10:03:55.274486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:41.078 [2024-11-19 10:03:55.274503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:41.078 [2024-11-19 10:03:55.274805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.078 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.337 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.337 "name": "raid_bdev1",
00:09:41.337 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec",
00:09:41.337 "strip_size_kb": 64,
00:09:41.337 "state": "online",
00:09:41.337 "raid_level": "concat",
00:09:41.337 "superblock": true,
00:09:41.337 "num_base_bdevs": 3,
00:09:41.337 "num_base_bdevs_discovered": 3,
00:09:41.337 "num_base_bdevs_operational": 3,
00:09:41.337 "base_bdevs_list": [
00:09:41.337 {
00:09:41.337 "name": "pt1",
00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:41.337 "is_configured": true,
00:09:41.337 "data_offset": 2048,
00:09:41.337 "data_size": 63488
00:09:41.337 },
00:09:41.337 {
00:09:41.337 "name": "pt2",
00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:41.337 "is_configured": true,
00:09:41.337 "data_offset": 2048,
00:09:41.337 "data_size": 63488
00:09:41.337 },
00:09:41.337 {
00:09:41.337 "name": "pt3",
00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:41.337 "is_configured": true,
00:09:41.337 "data_offset": 2048,
00:09:41.337 "data_size": 63488
00:09:41.337 }
00:09:41.337 ]
00:09:41.337 }'
00:09:41.337 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.337 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:41.596 [2024-11-19 10:03:55.803398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:41.596 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.855 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:41.855 "name": "raid_bdev1",
00:09:41.855 "aliases": [
00:09:41.855 "8a479b44-4c4f-4135-8ea6-beae18adceec"
00:09:41.855 ],
00:09:41.855 "product_name": "Raid Volume",
00:09:41.855 "block_size": 512,
00:09:41.855 "num_blocks": 190464,
00:09:41.855 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec",
00:09:41.855 "assigned_rate_limits": {
00:09:41.855 "rw_ios_per_sec": 0,
00:09:41.855 "rw_mbytes_per_sec": 0,
00:09:41.855 "r_mbytes_per_sec": 0,
00:09:41.855 "w_mbytes_per_sec": 0
00:09:41.855 },
00:09:41.855 "claimed": false,
00:09:41.855 "zoned": false,
00:09:41.855 "supported_io_types": {
00:09:41.855 "read": true,
00:09:41.855 "write": true,
00:09:41.855 "unmap": true,
00:09:41.855 "flush": true,
00:09:41.855 "reset": true,
00:09:41.855 "nvme_admin": false,
00:09:41.855 "nvme_io": false,
00:09:41.855 "nvme_io_md": false,
00:09:41.855 "write_zeroes": true,
00:09:41.855 "zcopy": false,
00:09:41.855 "get_zone_info": false,
00:09:41.855 "zone_management": false,
00:09:41.855 "zone_append": false,
00:09:41.855 "compare": false,
00:09:41.855 "compare_and_write": false,
00:09:41.855 "abort": false,
00:09:41.855 "seek_hole": false,
00:09:41.855 "seek_data": false,
00:09:41.855 "copy": false,
00:09:41.855 "nvme_iov_md": false
00:09:41.855 },
00:09:41.855 "memory_domains": [
00:09:41.855 {
00:09:41.855 "dma_device_id": "system",
00:09:41.855 "dma_device_type": 1
00:09:41.855 },
00:09:41.855 {
00:09:41.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:41.855 "dma_device_type": 2
00:09:41.855 },
00:09:41.855 {
00:09:41.855 "dma_device_id": "system",
00:09:41.855 "dma_device_type": 1
00:09:41.855 },
00:09:41.855 {
00:09:41.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:41.855 "dma_device_type": 2
00:09:41.855 },
00:09:41.855 {
00:09:41.855 "dma_device_id": "system",
00:09:41.855 "dma_device_type": 1
00:09:41.855 },
00:09:41.855 {
00:09:41.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:41.855 "dma_device_type": 2
00:09:41.855 }
00:09:41.855 ],
00:09:41.855 "driver_specific": {
00:09:41.856 "raid": {
00:09:41.856 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec",
00:09:41.856 "strip_size_kb": 64,
00:09:41.856 "state": "online",
00:09:41.856 "raid_level": "concat",
00:09:41.856 "superblock": true,
00:09:41.856 "num_base_bdevs": 3,
00:09:41.856 "num_base_bdevs_discovered": 3,
00:09:41.856 "num_base_bdevs_operational": 3,
00:09:41.856 "base_bdevs_list": [
00:09:41.856 {
00:09:41.856 "name": "pt1",
00:09:41.856 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:41.856 "is_configured": true,
00:09:41.856 "data_offset": 2048,
00:09:41.856 "data_size": 63488
00:09:41.856 },
00:09:41.856 {
00:09:41.856 "name": "pt2",
00:09:41.856 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:41.856 "is_configured": true,
00:09:41.856 "data_offset": 2048,
00:09:41.856 "data_size": 63488
00:09:41.856 },
00:09:41.856 {
00:09:41.856 "name": "pt3",
00:09:41.856 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:41.856 "is_configured": true,
00:09:41.856 "data_offset": 2048,
00:09:41.856 "data_size": 63488
00:09:41.856 }
00:09:41.856 ]
00:09:41.856 }
00:09:41.856 }
00:09:41.856 }'
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:41.856 pt2
00:09:41.856 pt3'
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:41.856 10:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:41.856 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:42.115 [2024-11-19 10:03:56.115377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.115 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a479b44-4c4f-4135-8ea6-beae18adceec
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8a479b44-4c4f-4135-8ea6-beae18adceec ']'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 [2024-11-19 10:03:56.163136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:42.116 [2024-11-19 10:03:56.163343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:42.116 [2024-11-19 10:03:56.163561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:42.116 [2024-11-19 10:03:56.163760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:42.116 [2024-11-19 10:03:56.163931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 [2024-11-19 10:03:56.311253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:42.116 [2024-11-19 10:03:56.314039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:42.116 [2024-11-19 10:03:56.314262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:42.116 [2024-11-19 10:03:56.314351] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:42.116 [2024-11-19 10:03:56.314425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:42.116 [2024-11-19 10:03:56.314457] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:42.116 [2024-11-19 10:03:56.314484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:42.116 [2024-11-19 10:03:56.314497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:42.116 request:
00:09:42.116 {
00:09:42.116 "name": "raid_bdev1",
00:09:42.116 "raid_level": "concat",
00:09:42.116 "base_bdevs": [
00:09:42.116 "malloc1",
00:09:42.116 "malloc2",
00:09:42.116 "malloc3"
00:09:42.116 ],
00:09:42.116 "strip_size_kb": 64,
00:09:42.116 "superblock": false,
00:09:42.116 "method": "bdev_raid_create",
00:09:42.116 "req_id": 1
00:09:42.116 }
00:09:42.116 Got JSON-RPC error response
00:09:42.116 response:
00:09:42.116 {
00:09:42.116 "code": -17,
00:09:42.116 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:42.116 }
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.116 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.376 [2024-11-19 10:03:56.379272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:42.376 [2024-11-19 10:03:56.379463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.376 [2024-11-19 10:03:56.379538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:42.376 [2024-11-19 10:03:56.379733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.376 [2024-11-19 10:03:56.382943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.376 [2024-11-19 10:03:56.383102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:42.376 [2024-11-19 10:03:56.383308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:42.376 [2024-11-19 10:03:56.383492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:42.376 pt1
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.376 "name": "raid_bdev1",
00:09:42.376 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec",
00:09:42.376 "strip_size_kb": 64,
00:09:42.376 "state": "configuring",
00:09:42.376 "raid_level": "concat",
00:09:42.376 "superblock": true,
00:09:42.376 "num_base_bdevs": 3,
00:09:42.376 "num_base_bdevs_discovered": 1,
00:09:42.376 "num_base_bdevs_operational": 3,
00:09:42.376 "base_bdevs_list": [
00:09:42.376 {
00:09:42.376 "name": "pt1",
00:09:42.376 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:42.376 "is_configured": true,
00:09:42.376 "data_offset": 2048,
00:09:42.376 "data_size": 63488
00:09:42.376 },
00:09:42.376 {
00:09:42.376 "name": null,
00:09:42.376 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:42.376 "is_configured": false,
00:09:42.376 "data_offset": 2048,
00:09:42.376 "data_size": 63488
00:09:42.376 },
00:09:42.376 {
00:09:42.376 "name": null,
00:09:42.376 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:42.376 "is_configured": false,
00:09:42.376 "data_offset": 2048,
00:09:42.376 "data_size": 63488
00:09:42.376 }
00:09:42.376 ]
00:09:42.376 }'
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.376 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.945 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:42.945 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:42.945 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.945 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.945 [2024-11-19 10:03:56.935605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:42.945 [2024-11-19 10:03:56.935706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.945 [2024-11-19 10:03:56.935747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:42.945 [2024-11-19 10:03:56.935763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.945 [2024-11-19 10:03:56.936432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.945 [2024-11-19 10:03:56.936466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:42.945 [2024-11-19 10:03:56.936587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:42.946 [2024-11-19 10:03:56.936621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:42.946 pt2
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.946 [2024-11-19 10:03:56.943589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.946 10:03:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.946 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.946 "name": "raid_bdev1",
00:09:42.946 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec",
00:09:42.946 "strip_size_kb": 64,
00:09:42.946 "state": "configuring",
00:09:42.946 "raid_level": "concat",
00:09:42.946 "superblock": true,
00:09:42.946 "num_base_bdevs": 3,
00:09:42.946 "num_base_bdevs_discovered": 1,
00:09:42.946 "num_base_bdevs_operational": 3,
00:09:42.946 "base_bdevs_list": [
00:09:42.946 {
00:09:42.946 "name": "pt1",
00:09:42.946 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:42.946 "is_configured": true,
00:09:42.946 "data_offset": 2048,
00:09:42.946 "data_size": 63488
00:09:42.946 },
00:09:42.946 {
00:09:42.946 "name": null,
00:09:42.946 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:42.946 "is_configured": false,
00:09:42.946 "data_offset": 0,
00:09:42.946 "data_size": 63488
00:09:42.946 },
00:09:42.946 {
00:09:42.946 "name": null,
00:09:42.946 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:42.946 "is_configured": false,
00:09:42.946 "data_offset": 2048,
00:09:42.946 "data_size": 63488
00:09:42.946 }
00:09:42.946 ]
00:09:42.946 }'
00:09:42.946 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.946 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.515 [2024-11-19 10:03:57.471726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:43.515 [2024-11-19 10:03:57.471858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.515 [2024-11-19 10:03:57.471891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:43.515 [2024-11-19 10:03:57.471909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.515 [2024-11-19 10:03:57.472579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.515 [2024-11-19 10:03:57.472617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:43.515 [2024-11-19 10:03:57.472753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:43.515 [2024-11-19 10:03:57.472806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:43.515 pt2
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.515 [2024-11-19 10:03:57.479663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:43.515 [2024-11-19 10:03:57.479732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.515 [2024-11-19 10:03:57.479752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:43.515 [2024-11-19 10:03:57.479783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.515 [2024-11-19 10:03:57.480283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.515 [2024-11-19 10:03:57.480324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:43.515 [2024-11-19 10:03:57.480399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:43.515 [2024-11-19 10:03:57.480431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:43.515 [2024-11-19 10:03:57.480603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:43.515 [2024-11-19 10:03:57.480623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:43.515 [2024-11-19 10:03:57.480973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb,
0x60d000005ee0 00:09:43.515 [2024-11-19 10:03:57.481173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.515 [2024-11-19 10:03:57.481203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:43.515 [2024-11-19 10:03:57.481393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.515 pt3 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.515 10:03:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.515 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.515 "name": "raid_bdev1", 00:09:43.515 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec", 00:09:43.515 "strip_size_kb": 64, 00:09:43.515 "state": "online", 00:09:43.515 "raid_level": "concat", 00:09:43.515 "superblock": true, 00:09:43.515 "num_base_bdevs": 3, 00:09:43.515 "num_base_bdevs_discovered": 3, 00:09:43.515 "num_base_bdevs_operational": 3, 00:09:43.515 "base_bdevs_list": [ 00:09:43.515 { 00:09:43.515 "name": "pt1", 00:09:43.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.515 "is_configured": true, 00:09:43.515 "data_offset": 2048, 00:09:43.515 "data_size": 63488 00:09:43.515 }, 00:09:43.515 { 00:09:43.515 "name": "pt2", 00:09:43.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.516 "is_configured": true, 00:09:43.516 "data_offset": 2048, 00:09:43.516 "data_size": 63488 00:09:43.516 }, 00:09:43.516 { 00:09:43.516 "name": "pt3", 00:09:43.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.516 "is_configured": true, 00:09:43.516 "data_offset": 2048, 00:09:43.516 "data_size": 63488 00:09:43.516 } 00:09:43.516 ] 00:09:43.516 }' 00:09:43.516 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.516 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.775 10:03:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.775 [2024-11-19 10:03:57.992327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.034 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.034 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.034 "name": "raid_bdev1", 00:09:44.034 "aliases": [ 00:09:44.034 "8a479b44-4c4f-4135-8ea6-beae18adceec" 00:09:44.034 ], 00:09:44.034 "product_name": "Raid Volume", 00:09:44.034 "block_size": 512, 00:09:44.034 "num_blocks": 190464, 00:09:44.034 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec", 00:09:44.034 "assigned_rate_limits": { 00:09:44.034 "rw_ios_per_sec": 0, 00:09:44.034 "rw_mbytes_per_sec": 0, 00:09:44.034 "r_mbytes_per_sec": 0, 00:09:44.034 "w_mbytes_per_sec": 0 00:09:44.034 }, 00:09:44.034 "claimed": false, 00:09:44.034 "zoned": false, 00:09:44.034 "supported_io_types": { 00:09:44.034 "read": true, 00:09:44.034 "write": true, 00:09:44.034 "unmap": true, 00:09:44.034 "flush": true, 00:09:44.034 "reset": true, 00:09:44.034 "nvme_admin": false, 00:09:44.034 "nvme_io": false, 
00:09:44.034 "nvme_io_md": false, 00:09:44.034 "write_zeroes": true, 00:09:44.034 "zcopy": false, 00:09:44.034 "get_zone_info": false, 00:09:44.034 "zone_management": false, 00:09:44.034 "zone_append": false, 00:09:44.034 "compare": false, 00:09:44.034 "compare_and_write": false, 00:09:44.034 "abort": false, 00:09:44.034 "seek_hole": false, 00:09:44.034 "seek_data": false, 00:09:44.034 "copy": false, 00:09:44.034 "nvme_iov_md": false 00:09:44.034 }, 00:09:44.034 "memory_domains": [ 00:09:44.034 { 00:09:44.034 "dma_device_id": "system", 00:09:44.034 "dma_device_type": 1 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.034 "dma_device_type": 2 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "dma_device_id": "system", 00:09:44.034 "dma_device_type": 1 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.034 "dma_device_type": 2 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "dma_device_id": "system", 00:09:44.034 "dma_device_type": 1 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.034 "dma_device_type": 2 00:09:44.034 } 00:09:44.034 ], 00:09:44.034 "driver_specific": { 00:09:44.034 "raid": { 00:09:44.034 "uuid": "8a479b44-4c4f-4135-8ea6-beae18adceec", 00:09:44.034 "strip_size_kb": 64, 00:09:44.034 "state": "online", 00:09:44.034 "raid_level": "concat", 00:09:44.034 "superblock": true, 00:09:44.034 "num_base_bdevs": 3, 00:09:44.034 "num_base_bdevs_discovered": 3, 00:09:44.034 "num_base_bdevs_operational": 3, 00:09:44.034 "base_bdevs_list": [ 00:09:44.034 { 00:09:44.034 "name": "pt1", 00:09:44.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.034 "is_configured": true, 00:09:44.034 "data_offset": 2048, 00:09:44.034 "data_size": 63488 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "name": "pt2", 00:09:44.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.034 "is_configured": true, 00:09:44.034 "data_offset": 2048, 00:09:44.034 
"data_size": 63488 00:09:44.034 }, 00:09:44.034 { 00:09:44.034 "name": "pt3", 00:09:44.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.034 "is_configured": true, 00:09:44.035 "data_offset": 2048, 00:09:44.035 "data_size": 63488 00:09:44.035 } 00:09:44.035 ] 00:09:44.035 } 00:09:44.035 } 00:09:44.035 }' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.035 pt2 00:09:44.035 pt3' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.035 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.294 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.294 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.294 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.294 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.295 [2024-11-19 10:03:58.316395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8a479b44-4c4f-4135-8ea6-beae18adceec '!=' 8a479b44-4c4f-4135-8ea6-beae18adceec ']' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66764 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66764 ']' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66764 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66764 00:09:44.295 killing process with pid 66764 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66764' 00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66764 00:09:44.295 [2024-11-19 10:03:58.391698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:44.295 10:03:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66764 00:09:44.295 [2024-11-19 10:03:58.391868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.295 [2024-11-19 10:03:58.391988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.295 [2024-11-19 10:03:58.392024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:44.554 [2024-11-19 10:03:58.681302] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.933 10:03:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:45.933 00:09:45.933 real 0m5.793s 00:09:45.933 user 0m8.610s 00:09:45.933 sys 0m0.899s 00:09:45.933 10:03:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.933 10:03:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.933 ************************************ 00:09:45.933 END TEST raid_superblock_test 00:09:45.933 ************************************ 00:09:45.933 10:03:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:45.933 10:03:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:45.933 10:03:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.933 10:03:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.933 ************************************ 00:09:45.933 START TEST raid_read_error_test 00:09:45.933 ************************************ 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:45.933 10:03:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fqrZS7XU9V 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67018 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67018 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67018 ']' 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.933 10:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.933 [2024-11-19 10:03:59.961995] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:09:45.933 [2024-11-19 10:03:59.962398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67018 ] 00:09:45.933 [2024-11-19 10:04:00.146416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.192 [2024-11-19 10:04:00.311555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.451 [2024-11-19 10:04:00.536580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.451 [2024-11-19 10:04:00.536661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 BaseBdev1_malloc 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 true 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 [2024-11-19 10:04:01.048783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:47.019 [2024-11-19 10:04:01.048912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.019 [2024-11-19 10:04:01.048959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:47.019 [2024-11-19 10:04:01.048979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.019 [2024-11-19 10:04:01.052062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.019 [2024-11-19 10:04:01.052262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:47.019 BaseBdev1 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 BaseBdev2_malloc 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 true 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.019 [2024-11-19 10:04:01.118924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:47.019 [2024-11-19 10:04:01.119178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.019 [2024-11-19 10:04:01.119216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:47.019 [2024-11-19 10:04:01.119236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.019 [2024-11-19 10:04:01.122364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.019 [2024-11-19 10:04:01.122540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:47.019 BaseBdev2 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.019 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 BaseBdev3_malloc 00:09:47.020 10:04:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 true 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 [2024-11-19 10:04:01.191617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:47.020 [2024-11-19 10:04:01.191701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.020 [2024-11-19 10:04:01.191728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:47.020 [2024-11-19 10:04:01.191746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.020 [2024-11-19 10:04:01.194744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.020 [2024-11-19 10:04:01.194816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:47.020 BaseBdev3 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 [2024-11-19 10:04:01.199756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.020 [2024-11-19 10:04:01.202488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.020 [2024-11-19 10:04:01.202766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.020 [2024-11-19 10:04:01.203112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.020 [2024-11-19 10:04:01.203134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.020 [2024-11-19 10:04:01.203492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:47.020 [2024-11-19 10:04:01.203707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.020 [2024-11-19 10:04:01.203730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:47.020 [2024-11-19 10:04:01.204001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.020 10:04:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.278 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.278 "name": "raid_bdev1", 00:09:47.278 "uuid": "803f4697-146f-4c40-a4cc-ea6bbc0eae33", 00:09:47.278 "strip_size_kb": 64, 00:09:47.278 "state": "online", 00:09:47.278 "raid_level": "concat", 00:09:47.278 "superblock": true, 00:09:47.278 "num_base_bdevs": 3, 00:09:47.278 "num_base_bdevs_discovered": 3, 00:09:47.278 "num_base_bdevs_operational": 3, 00:09:47.278 "base_bdevs_list": [ 00:09:47.278 { 00:09:47.278 "name": "BaseBdev1", 00:09:47.278 "uuid": "15161f58-574b-5ec2-8c5d-226412906d63", 00:09:47.278 "is_configured": true, 00:09:47.278 "data_offset": 2048, 00:09:47.278 "data_size": 63488 00:09:47.278 }, 00:09:47.278 { 00:09:47.278 "name": "BaseBdev2", 00:09:47.278 "uuid": "18b00759-9b95-59a3-b3b1-cec818302624", 00:09:47.278 "is_configured": true, 00:09:47.278 "data_offset": 2048, 00:09:47.278 "data_size": 63488 
00:09:47.278 }, 00:09:47.278 { 00:09:47.278 "name": "BaseBdev3", 00:09:47.278 "uuid": "729cc9db-8c8c-53fe-b4dd-223d2368f845", 00:09:47.278 "is_configured": true, 00:09:47.278 "data_offset": 2048, 00:09:47.278 "data_size": 63488 00:09:47.278 } 00:09:47.278 ] 00:09:47.278 }' 00:09:47.278 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.279 10:04:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.552 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:47.552 10:04:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.821 [2024-11-19 10:04:01.869731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.758 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.759 "name": "raid_bdev1", 00:09:48.759 "uuid": "803f4697-146f-4c40-a4cc-ea6bbc0eae33", 00:09:48.759 "strip_size_kb": 64, 00:09:48.759 "state": "online", 00:09:48.759 "raid_level": "concat", 00:09:48.759 "superblock": true, 00:09:48.759 "num_base_bdevs": 3, 00:09:48.759 "num_base_bdevs_discovered": 3, 00:09:48.759 "num_base_bdevs_operational": 3, 00:09:48.759 "base_bdevs_list": [ 00:09:48.759 { 00:09:48.759 "name": "BaseBdev1", 00:09:48.759 "uuid": "15161f58-574b-5ec2-8c5d-226412906d63", 00:09:48.759 "is_configured": true, 00:09:48.759 "data_offset": 2048, 00:09:48.759 "data_size": 63488 
00:09:48.759 }, 00:09:48.759 { 00:09:48.759 "name": "BaseBdev2", 00:09:48.759 "uuid": "18b00759-9b95-59a3-b3b1-cec818302624", 00:09:48.759 "is_configured": true, 00:09:48.759 "data_offset": 2048, 00:09:48.759 "data_size": 63488 00:09:48.759 }, 00:09:48.759 { 00:09:48.759 "name": "BaseBdev3", 00:09:48.759 "uuid": "729cc9db-8c8c-53fe-b4dd-223d2368f845", 00:09:48.759 "is_configured": true, 00:09:48.759 "data_offset": 2048, 00:09:48.759 "data_size": 63488 00:09:48.759 } 00:09:48.759 ] 00:09:48.759 }' 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.759 10:04:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.327 [2024-11-19 10:04:03.277134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.327 [2024-11-19 10:04:03.277172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.327 [2024-11-19 10:04:03.280814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.327 [2024-11-19 10:04:03.280877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.327 [2024-11-19 10:04:03.280968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.327 [2024-11-19 10:04:03.280990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:49.327 { 00:09:49.327 "results": [ 00:09:49.327 { 00:09:49.327 "job": "raid_bdev1", 00:09:49.327 "core_mask": "0x1", 00:09:49.327 "workload": "randrw", 00:09:49.327 "percentage": 50, 
00:09:49.327 "status": "finished", 00:09:49.327 "queue_depth": 1, 00:09:49.327 "io_size": 131072, 00:09:49.327 "runtime": 1.404646, 00:09:49.327 "iops": 9459.322847180001, 00:09:49.327 "mibps": 1182.4153558975001, 00:09:49.327 "io_failed": 1, 00:09:49.327 "io_timeout": 0, 00:09:49.327 "avg_latency_us": 148.5203524711291, 00:09:49.327 "min_latency_us": 39.79636363636364, 00:09:49.327 "max_latency_us": 1921.3963636363637 00:09:49.327 } 00:09:49.327 ], 00:09:49.327 "core_count": 1 00:09:49.327 } 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67018 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67018 ']' 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67018 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67018 00:09:49.327 killing process with pid 67018 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67018' 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67018 00:09:49.327 [2024-11-19 10:04:03.320203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.327 10:04:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67018 00:09:49.327 [2024-11-19 
10:04:03.546847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fqrZS7XU9V 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:50.706 ************************************ 00:09:50.706 END TEST raid_read_error_test 00:09:50.706 ************************************ 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:50.706 00:09:50.706 real 0m4.912s 00:09:50.706 user 0m6.013s 00:09:50.706 sys 0m0.687s 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.706 10:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.706 10:04:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:50.706 10:04:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.706 10:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.706 10:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.706 ************************************ 00:09:50.706 START TEST raid_write_error_test 00:09:50.706 ************************************ 00:09:50.706 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:50.706 10:04:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:50.707 10:04:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MRBsg6x3i4 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67169 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67169 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67169 ']' 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.707 10:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.966 [2024-11-19 10:04:04.944400] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:50.966 [2024-11-19 10:04:04.945223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:09:50.966 [2024-11-19 10:04:05.136696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.226 [2024-11-19 10:04:05.297932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.485 [2024-11-19 10:04:05.537655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.485 [2024-11-19 10:04:05.537758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.744 10:04:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 BaseBdev1_malloc 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 true 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.003 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 [2024-11-19 10:04:06.022841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.004 [2024-11-19 10:04:06.022917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.004 [2024-11-19 10:04:06.022947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.004 [2024-11-19 10:04:06.022965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.004 [2024-11-19 10:04:06.026019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.004 [2024-11-19 10:04:06.026070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.004 BaseBdev1 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.004 BaseBdev2_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 true 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 [2024-11-19 10:04:06.088382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.004 [2024-11-19 10:04:06.088467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.004 [2024-11-19 10:04:06.088492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.004 [2024-11-19 10:04:06.088508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.004 [2024-11-19 10:04:06.091562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.004 [2024-11-19 10:04:06.091608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.004 BaseBdev2 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.004 10:04:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 BaseBdev3_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 true 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 [2024-11-19 10:04:06.178293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.004 [2024-11-19 10:04:06.178365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.004 [2024-11-19 10:04:06.178393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.004 [2024-11-19 10:04:06.178426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.004 [2024-11-19 10:04:06.181589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.004 [2024-11-19 10:04:06.181654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:52.004 BaseBdev3 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 [2024-11-19 10:04:06.186562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.004 [2024-11-19 10:04:06.189317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.004 [2024-11-19 10:04:06.189438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.004 [2024-11-19 10:04:06.189712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.004 [2024-11-19 10:04:06.189746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.004 [2024-11-19 10:04:06.190144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:52.004 [2024-11-19 10:04:06.190370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.004 [2024-11-19 10:04:06.190394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:52.004 [2024-11-19 10:04:06.190625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.004 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.263 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.263 "name": "raid_bdev1", 00:09:52.263 "uuid": "f0e1cdd2-f532-490e-84fb-78eec5e9bc0c", 00:09:52.263 "strip_size_kb": 64, 00:09:52.263 "state": "online", 00:09:52.263 "raid_level": "concat", 00:09:52.263 "superblock": true, 00:09:52.263 "num_base_bdevs": 3, 00:09:52.263 "num_base_bdevs_discovered": 3, 00:09:52.263 "num_base_bdevs_operational": 3, 00:09:52.263 "base_bdevs_list": [ 00:09:52.263 { 00:09:52.263 
"name": "BaseBdev1", 00:09:52.263 "uuid": "64151ac4-761c-5909-8e37-479831268331", 00:09:52.263 "is_configured": true, 00:09:52.263 "data_offset": 2048, 00:09:52.263 "data_size": 63488 00:09:52.263 }, 00:09:52.263 { 00:09:52.263 "name": "BaseBdev2", 00:09:52.263 "uuid": "7416eaa6-371c-5297-8647-ecae30a14771", 00:09:52.263 "is_configured": true, 00:09:52.263 "data_offset": 2048, 00:09:52.263 "data_size": 63488 00:09:52.263 }, 00:09:52.263 { 00:09:52.263 "name": "BaseBdev3", 00:09:52.263 "uuid": "84799d57-42c2-5e93-9a89-27f15ef460b9", 00:09:52.263 "is_configured": true, 00:09:52.263 "data_offset": 2048, 00:09:52.263 "data_size": 63488 00:09:52.263 } 00:09:52.263 ] 00:09:52.263 }' 00:09:52.263 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.263 10:04:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.527 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.527 10:04:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.810 [2024-11-19 10:04:06.872340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:53.745 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.746 "name": "raid_bdev1", 00:09:53.746 "uuid": "f0e1cdd2-f532-490e-84fb-78eec5e9bc0c", 00:09:53.746 "strip_size_kb": 64, 00:09:53.746 "state": "online", 
00:09:53.746 "raid_level": "concat", 00:09:53.746 "superblock": true, 00:09:53.746 "num_base_bdevs": 3, 00:09:53.746 "num_base_bdevs_discovered": 3, 00:09:53.746 "num_base_bdevs_operational": 3, 00:09:53.746 "base_bdevs_list": [ 00:09:53.746 { 00:09:53.746 "name": "BaseBdev1", 00:09:53.746 "uuid": "64151ac4-761c-5909-8e37-479831268331", 00:09:53.746 "is_configured": true, 00:09:53.746 "data_offset": 2048, 00:09:53.746 "data_size": 63488 00:09:53.746 }, 00:09:53.746 { 00:09:53.746 "name": "BaseBdev2", 00:09:53.746 "uuid": "7416eaa6-371c-5297-8647-ecae30a14771", 00:09:53.746 "is_configured": true, 00:09:53.746 "data_offset": 2048, 00:09:53.746 "data_size": 63488 00:09:53.746 }, 00:09:53.746 { 00:09:53.746 "name": "BaseBdev3", 00:09:53.746 "uuid": "84799d57-42c2-5e93-9a89-27f15ef460b9", 00:09:53.746 "is_configured": true, 00:09:53.746 "data_offset": 2048, 00:09:53.746 "data_size": 63488 00:09:53.746 } 00:09:53.746 ] 00:09:53.746 }' 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.746 10:04:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.313 [2024-11-19 10:04:08.299436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.313 [2024-11-19 10:04:08.299617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.313 [2024-11-19 10:04:08.303132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.313 [2024-11-19 10:04:08.303316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.313 [2024-11-19 10:04:08.303424] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.313 { 00:09:54.313 "results": [ 00:09:54.313 { 00:09:54.313 "job": "raid_bdev1", 00:09:54.313 "core_mask": "0x1", 00:09:54.313 "workload": "randrw", 00:09:54.313 "percentage": 50, 00:09:54.313 "status": "finished", 00:09:54.313 "queue_depth": 1, 00:09:54.313 "io_size": 131072, 00:09:54.313 "runtime": 1.424515, 00:09:54.313 "iops": 9330.895076569921, 00:09:54.313 "mibps": 1166.3618845712401, 00:09:54.313 "io_failed": 1, 00:09:54.313 "io_timeout": 0, 00:09:54.313 "avg_latency_us": 150.81610526387777, 00:09:54.313 "min_latency_us": 40.02909090909091, 00:09:54.313 "max_latency_us": 1936.290909090909 00:09:54.313 } 00:09:54.313 ], 00:09:54.313 "core_count": 1 00:09:54.313 } 00:09:54.313 [2024-11-19 10:04:08.303582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67169 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67169 ']' 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67169 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67169 00:09:54.313 killing process with pid 67169 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.313 10:04:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67169' 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67169 00:09:54.313 [2024-11-19 10:04:08.339070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.313 10:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67169 00:09:54.571 [2024-11-19 10:04:08.577519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MRBsg6x3i4 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:55.944 00:09:55.944 real 0m4.996s 00:09:55.944 user 0m6.153s 00:09:55.944 sys 0m0.670s 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.944 10:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.944 ************************************ 00:09:55.944 END TEST raid_write_error_test 00:09:55.944 ************************************ 00:09:55.944 10:04:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:55.944 10:04:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:55.944 10:04:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.944 10:04:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.944 10:04:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.944 ************************************ 00:09:55.944 START TEST raid_state_function_test 00:09:55.944 ************************************ 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.944 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:55.944 Process raid pid: 67313 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67313 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67313' 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67313 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67313 ']' 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.945 10:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.945 [2024-11-19 10:04:09.975217] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:09:55.945 [2024-11-19 10:04:09.975744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.945 [2024-11-19 10:04:10.166171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.231 [2024-11-19 10:04:10.334701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.490 [2024-11-19 10:04:10.575815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.490 [2024-11-19 10:04:10.575889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.749 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.749 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.749 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.749 10:04:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.749 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.749 [2024-11-19 10:04:10.976818] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.749 [2024-11-19 10:04:10.976900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.749 [2024-11-19 10:04:10.976918] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.749 [2024-11-19 10:04:10.976936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.749 [2024-11-19 10:04:10.976946] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.749 [2024-11-19 10:04:10.976962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.007 
10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.007 10:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.008 10:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.008 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.008 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.008 "name": "Existed_Raid", 00:09:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.008 "strip_size_kb": 0, 00:09:57.008 "state": "configuring", 00:09:57.008 "raid_level": "raid1", 00:09:57.008 "superblock": false, 00:09:57.008 "num_base_bdevs": 3, 00:09:57.008 "num_base_bdevs_discovered": 0, 00:09:57.008 "num_base_bdevs_operational": 3, 00:09:57.008 "base_bdevs_list": [ 00:09:57.008 { 00:09:57.008 "name": "BaseBdev1", 00:09:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.008 "is_configured": false, 00:09:57.008 "data_offset": 0, 00:09:57.008 "data_size": 0 00:09:57.008 }, 00:09:57.008 { 00:09:57.008 "name": "BaseBdev2", 00:09:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.008 "is_configured": false, 00:09:57.008 "data_offset": 0, 00:09:57.008 "data_size": 0 00:09:57.008 }, 00:09:57.008 { 00:09:57.008 "name": "BaseBdev3", 00:09:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.008 "is_configured": false, 00:09:57.008 "data_offset": 0, 00:09:57.008 "data_size": 0 00:09:57.008 } 00:09:57.008 ] 00:09:57.008 }' 00:09:57.008 10:04:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.008 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.267 [2024-11-19 10:04:11.472914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.267 [2024-11-19 10:04:11.472965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.267 [2024-11-19 10:04:11.480872] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.267 [2024-11-19 10:04:11.480936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.267 [2024-11-19 10:04:11.480953] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.267 [2024-11-19 10:04:11.480970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.267 [2024-11-19 10:04:11.480980] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.267 [2024-11-19 10:04:11.480995] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.267 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.526 [2024-11-19 10:04:11.529984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.526 BaseBdev1 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.526 10:04:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.527 [ 00:09:57.527 { 00:09:57.527 "name": "BaseBdev1", 00:09:57.527 "aliases": [ 00:09:57.527 "f2569858-526b-4be8-8084-65a7c10a54af" 00:09:57.527 ], 00:09:57.527 "product_name": "Malloc disk", 00:09:57.527 "block_size": 512, 00:09:57.527 "num_blocks": 65536, 00:09:57.527 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:57.527 "assigned_rate_limits": { 00:09:57.527 "rw_ios_per_sec": 0, 00:09:57.527 "rw_mbytes_per_sec": 0, 00:09:57.527 "r_mbytes_per_sec": 0, 00:09:57.527 "w_mbytes_per_sec": 0 00:09:57.527 }, 00:09:57.527 "claimed": true, 00:09:57.527 "claim_type": "exclusive_write", 00:09:57.527 "zoned": false, 00:09:57.527 "supported_io_types": { 00:09:57.527 "read": true, 00:09:57.527 "write": true, 00:09:57.527 "unmap": true, 00:09:57.527 "flush": true, 00:09:57.527 "reset": true, 00:09:57.527 "nvme_admin": false, 00:09:57.527 "nvme_io": false, 00:09:57.527 "nvme_io_md": false, 00:09:57.527 "write_zeroes": true, 00:09:57.527 "zcopy": true, 00:09:57.527 "get_zone_info": false, 00:09:57.527 "zone_management": false, 00:09:57.527 "zone_append": false, 00:09:57.527 "compare": false, 00:09:57.527 "compare_and_write": false, 00:09:57.527 "abort": true, 00:09:57.527 "seek_hole": false, 00:09:57.527 "seek_data": false, 00:09:57.527 "copy": true, 00:09:57.527 "nvme_iov_md": false 00:09:57.527 }, 00:09:57.527 "memory_domains": [ 00:09:57.527 { 00:09:57.527 "dma_device_id": "system", 00:09:57.527 "dma_device_type": 1 00:09:57.527 }, 00:09:57.527 { 00:09:57.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.527 "dma_device_type": 2 00:09:57.527 } 00:09:57.527 ], 00:09:57.527 "driver_specific": {} 00:09:57.527 } 00:09:57.527 ] 00:09:57.527 10:04:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:57.527 "name": "Existed_Raid", 00:09:57.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.527 "strip_size_kb": 0, 00:09:57.527 "state": "configuring", 00:09:57.527 "raid_level": "raid1", 00:09:57.527 "superblock": false, 00:09:57.527 "num_base_bdevs": 3, 00:09:57.527 "num_base_bdevs_discovered": 1, 00:09:57.527 "num_base_bdevs_operational": 3, 00:09:57.527 "base_bdevs_list": [ 00:09:57.527 { 00:09:57.527 "name": "BaseBdev1", 00:09:57.527 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:57.527 "is_configured": true, 00:09:57.527 "data_offset": 0, 00:09:57.527 "data_size": 65536 00:09:57.527 }, 00:09:57.527 { 00:09:57.527 "name": "BaseBdev2", 00:09:57.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.527 "is_configured": false, 00:09:57.527 "data_offset": 0, 00:09:57.527 "data_size": 0 00:09:57.527 }, 00:09:57.527 { 00:09:57.527 "name": "BaseBdev3", 00:09:57.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.527 "is_configured": false, 00:09:57.527 "data_offset": 0, 00:09:57.527 "data_size": 0 00:09:57.527 } 00:09:57.527 ] 00:09:57.527 }' 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.527 10:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.095 [2024-11-19 10:04:12.066197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.095 [2024-11-19 10:04:12.066421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.095 [2024-11-19 10:04:12.074214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.095 [2024-11-19 10:04:12.077099] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.095 [2024-11-19 10:04:12.077272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.095 [2024-11-19 10:04:12.077393] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.095 [2024-11-19 10:04:12.077453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.095 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.095 "name": "Existed_Raid", 00:09:58.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.095 "strip_size_kb": 0, 00:09:58.095 "state": "configuring", 00:09:58.095 "raid_level": "raid1", 00:09:58.095 "superblock": false, 00:09:58.095 "num_base_bdevs": 3, 00:09:58.095 "num_base_bdevs_discovered": 1, 00:09:58.095 "num_base_bdevs_operational": 3, 00:09:58.095 "base_bdevs_list": [ 00:09:58.095 { 00:09:58.095 "name": "BaseBdev1", 00:09:58.095 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:58.095 "is_configured": true, 00:09:58.095 "data_offset": 0, 00:09:58.095 "data_size": 65536 00:09:58.095 }, 00:09:58.095 { 00:09:58.095 "name": "BaseBdev2", 00:09:58.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.095 
"is_configured": false, 00:09:58.095 "data_offset": 0, 00:09:58.095 "data_size": 0 00:09:58.095 }, 00:09:58.095 { 00:09:58.095 "name": "BaseBdev3", 00:09:58.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.096 "is_configured": false, 00:09:58.096 "data_offset": 0, 00:09:58.096 "data_size": 0 00:09:58.096 } 00:09:58.096 ] 00:09:58.096 }' 00:09:58.096 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.096 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.663 [2024-11-19 10:04:12.628753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.663 BaseBdev2 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.663 10:04:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.663 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.663 [ 00:09:58.663 { 00:09:58.663 "name": "BaseBdev2", 00:09:58.663 "aliases": [ 00:09:58.663 "927a256a-98b6-49d2-9c58-de0744ab3526" 00:09:58.663 ], 00:09:58.663 "product_name": "Malloc disk", 00:09:58.663 "block_size": 512, 00:09:58.663 "num_blocks": 65536, 00:09:58.663 "uuid": "927a256a-98b6-49d2-9c58-de0744ab3526", 00:09:58.663 "assigned_rate_limits": { 00:09:58.663 "rw_ios_per_sec": 0, 00:09:58.663 "rw_mbytes_per_sec": 0, 00:09:58.663 "r_mbytes_per_sec": 0, 00:09:58.663 "w_mbytes_per_sec": 0 00:09:58.663 }, 00:09:58.663 "claimed": true, 00:09:58.663 "claim_type": "exclusive_write", 00:09:58.663 "zoned": false, 00:09:58.663 "supported_io_types": { 00:09:58.663 "read": true, 00:09:58.663 "write": true, 00:09:58.663 "unmap": true, 00:09:58.663 "flush": true, 00:09:58.663 "reset": true, 00:09:58.664 "nvme_admin": false, 00:09:58.664 "nvme_io": false, 00:09:58.664 "nvme_io_md": false, 00:09:58.664 "write_zeroes": true, 00:09:58.664 "zcopy": true, 00:09:58.664 "get_zone_info": false, 00:09:58.664 "zone_management": false, 00:09:58.664 "zone_append": false, 00:09:58.664 "compare": false, 00:09:58.664 "compare_and_write": false, 00:09:58.664 "abort": true, 00:09:58.664 "seek_hole": false, 00:09:58.664 "seek_data": false, 00:09:58.664 "copy": true, 00:09:58.664 "nvme_iov_md": false 00:09:58.664 }, 00:09:58.664 
"memory_domains": [ 00:09:58.664 { 00:09:58.664 "dma_device_id": "system", 00:09:58.664 "dma_device_type": 1 00:09:58.664 }, 00:09:58.664 { 00:09:58.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.664 "dma_device_type": 2 00:09:58.664 } 00:09:58.664 ], 00:09:58.664 "driver_specific": {} 00:09:58.664 } 00:09:58.664 ] 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.664 "name": "Existed_Raid", 00:09:58.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.664 "strip_size_kb": 0, 00:09:58.664 "state": "configuring", 00:09:58.664 "raid_level": "raid1", 00:09:58.664 "superblock": false, 00:09:58.664 "num_base_bdevs": 3, 00:09:58.664 "num_base_bdevs_discovered": 2, 00:09:58.664 "num_base_bdevs_operational": 3, 00:09:58.664 "base_bdevs_list": [ 00:09:58.664 { 00:09:58.664 "name": "BaseBdev1", 00:09:58.664 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:58.664 "is_configured": true, 00:09:58.664 "data_offset": 0, 00:09:58.664 "data_size": 65536 00:09:58.664 }, 00:09:58.664 { 00:09:58.664 "name": "BaseBdev2", 00:09:58.664 "uuid": "927a256a-98b6-49d2-9c58-de0744ab3526", 00:09:58.664 "is_configured": true, 00:09:58.664 "data_offset": 0, 00:09:58.664 "data_size": 65536 00:09:58.664 }, 00:09:58.664 { 00:09:58.664 "name": "BaseBdev3", 00:09:58.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.664 "is_configured": false, 00:09:58.664 "data_offset": 0, 00:09:58.664 "data_size": 0 00:09:58.664 } 00:09:58.664 ] 00:09:58.664 }' 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.664 10:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:58.922 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.922 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.181 [2024-11-19 10:04:13.199150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.181 [2024-11-19 10:04:13.199519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.181 [2024-11-19 10:04:13.199555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:59.181 [2024-11-19 10:04:13.199969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:59.181 [2024-11-19 10:04:13.200226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.181 [2024-11-19 10:04:13.200244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:59.181 BaseBdev3 00:09:59.181 [2024-11-19 10:04:13.200773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.181 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.182 [ 00:09:59.182 { 00:09:59.182 "name": "BaseBdev3", 00:09:59.182 "aliases": [ 00:09:59.182 "9044d7e8-46ef-497b-aeab-37531b2fde9c" 00:09:59.182 ], 00:09:59.182 "product_name": "Malloc disk", 00:09:59.182 "block_size": 512, 00:09:59.182 "num_blocks": 65536, 00:09:59.182 "uuid": "9044d7e8-46ef-497b-aeab-37531b2fde9c", 00:09:59.182 "assigned_rate_limits": { 00:09:59.182 "rw_ios_per_sec": 0, 00:09:59.182 "rw_mbytes_per_sec": 0, 00:09:59.182 "r_mbytes_per_sec": 0, 00:09:59.182 "w_mbytes_per_sec": 0 00:09:59.182 }, 00:09:59.182 "claimed": true, 00:09:59.182 "claim_type": "exclusive_write", 00:09:59.182 "zoned": false, 00:09:59.182 "supported_io_types": { 00:09:59.182 "read": true, 00:09:59.182 "write": true, 00:09:59.182 "unmap": true, 00:09:59.182 "flush": true, 00:09:59.182 "reset": true, 00:09:59.182 "nvme_admin": false, 00:09:59.182 "nvme_io": false, 00:09:59.182 "nvme_io_md": false, 00:09:59.182 "write_zeroes": true, 00:09:59.182 "zcopy": true, 00:09:59.182 "get_zone_info": false, 00:09:59.182 "zone_management": false, 00:09:59.182 "zone_append": false, 00:09:59.182 "compare": false, 00:09:59.182 "compare_and_write": false, 00:09:59.182 "abort": true, 00:09:59.182 "seek_hole": false, 00:09:59.182 "seek_data": false, 00:09:59.182 
"copy": true, 00:09:59.182 "nvme_iov_md": false 00:09:59.182 }, 00:09:59.182 "memory_domains": [ 00:09:59.182 { 00:09:59.182 "dma_device_id": "system", 00:09:59.182 "dma_device_type": 1 00:09:59.182 }, 00:09:59.182 { 00:09:59.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.182 "dma_device_type": 2 00:09:59.182 } 00:09:59.182 ], 00:09:59.182 "driver_specific": {} 00:09:59.182 } 00:09:59.182 ] 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.182 10:04:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.182 "name": "Existed_Raid", 00:09:59.182 "uuid": "81f33444-80ad-45c8-aaf2-3d9a162f0aa3", 00:09:59.182 "strip_size_kb": 0, 00:09:59.182 "state": "online", 00:09:59.182 "raid_level": "raid1", 00:09:59.182 "superblock": false, 00:09:59.182 "num_base_bdevs": 3, 00:09:59.182 "num_base_bdevs_discovered": 3, 00:09:59.182 "num_base_bdevs_operational": 3, 00:09:59.182 "base_bdevs_list": [ 00:09:59.182 { 00:09:59.182 "name": "BaseBdev1", 00:09:59.182 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:59.182 "is_configured": true, 00:09:59.182 "data_offset": 0, 00:09:59.182 "data_size": 65536 00:09:59.182 }, 00:09:59.182 { 00:09:59.182 "name": "BaseBdev2", 00:09:59.182 "uuid": "927a256a-98b6-49d2-9c58-de0744ab3526", 00:09:59.182 "is_configured": true, 00:09:59.182 "data_offset": 0, 00:09:59.182 "data_size": 65536 00:09:59.182 }, 00:09:59.182 { 00:09:59.182 "name": "BaseBdev3", 00:09:59.182 "uuid": "9044d7e8-46ef-497b-aeab-37531b2fde9c", 00:09:59.182 "is_configured": true, 00:09:59.182 "data_offset": 0, 00:09:59.182 "data_size": 65536 00:09:59.182 } 00:09:59.182 ] 00:09:59.182 }' 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.182 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.750 10:04:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.750 [2024-11-19 10:04:13.755812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.750 "name": "Existed_Raid", 00:09:59.750 "aliases": [ 00:09:59.750 "81f33444-80ad-45c8-aaf2-3d9a162f0aa3" 00:09:59.750 ], 00:09:59.750 "product_name": "Raid Volume", 00:09:59.750 "block_size": 512, 00:09:59.750 "num_blocks": 65536, 00:09:59.750 "uuid": "81f33444-80ad-45c8-aaf2-3d9a162f0aa3", 00:09:59.750 "assigned_rate_limits": { 00:09:59.750 "rw_ios_per_sec": 0, 00:09:59.750 "rw_mbytes_per_sec": 0, 00:09:59.750 "r_mbytes_per_sec": 0, 00:09:59.750 "w_mbytes_per_sec": 0 00:09:59.750 }, 00:09:59.750 "claimed": false, 00:09:59.750 "zoned": false, 
00:09:59.750 "supported_io_types": { 00:09:59.750 "read": true, 00:09:59.750 "write": true, 00:09:59.750 "unmap": false, 00:09:59.750 "flush": false, 00:09:59.750 "reset": true, 00:09:59.750 "nvme_admin": false, 00:09:59.750 "nvme_io": false, 00:09:59.750 "nvme_io_md": false, 00:09:59.750 "write_zeroes": true, 00:09:59.750 "zcopy": false, 00:09:59.750 "get_zone_info": false, 00:09:59.750 "zone_management": false, 00:09:59.750 "zone_append": false, 00:09:59.750 "compare": false, 00:09:59.750 "compare_and_write": false, 00:09:59.750 "abort": false, 00:09:59.750 "seek_hole": false, 00:09:59.750 "seek_data": false, 00:09:59.750 "copy": false, 00:09:59.750 "nvme_iov_md": false 00:09:59.750 }, 00:09:59.750 "memory_domains": [ 00:09:59.750 { 00:09:59.750 "dma_device_id": "system", 00:09:59.750 "dma_device_type": 1 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.750 "dma_device_type": 2 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "dma_device_id": "system", 00:09:59.750 "dma_device_type": 1 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.750 "dma_device_type": 2 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "dma_device_id": "system", 00:09:59.750 "dma_device_type": 1 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.750 "dma_device_type": 2 00:09:59.750 } 00:09:59.750 ], 00:09:59.750 "driver_specific": { 00:09:59.750 "raid": { 00:09:59.750 "uuid": "81f33444-80ad-45c8-aaf2-3d9a162f0aa3", 00:09:59.750 "strip_size_kb": 0, 00:09:59.750 "state": "online", 00:09:59.750 "raid_level": "raid1", 00:09:59.750 "superblock": false, 00:09:59.750 "num_base_bdevs": 3, 00:09:59.750 "num_base_bdevs_discovered": 3, 00:09:59.750 "num_base_bdevs_operational": 3, 00:09:59.750 "base_bdevs_list": [ 00:09:59.750 { 00:09:59.750 "name": "BaseBdev1", 00:09:59.750 "uuid": "f2569858-526b-4be8-8084-65a7c10a54af", 00:09:59.750 "is_configured": true, 00:09:59.750 
"data_offset": 0, 00:09:59.750 "data_size": 65536 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "name": "BaseBdev2", 00:09:59.750 "uuid": "927a256a-98b6-49d2-9c58-de0744ab3526", 00:09:59.750 "is_configured": true, 00:09:59.750 "data_offset": 0, 00:09:59.750 "data_size": 65536 00:09:59.750 }, 00:09:59.750 { 00:09:59.750 "name": "BaseBdev3", 00:09:59.750 "uuid": "9044d7e8-46ef-497b-aeab-37531b2fde9c", 00:09:59.750 "is_configured": true, 00:09:59.750 "data_offset": 0, 00:09:59.750 "data_size": 65536 00:09:59.750 } 00:09:59.750 ] 00:09:59.750 } 00:09:59.750 } 00:09:59.750 }' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.750 BaseBdev2 00:09:59.750 BaseBdev3' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.750 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.010 10:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.010 [2024-11-19 10:04:14.083566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.010 "name": "Existed_Raid", 00:10:00.010 "uuid": "81f33444-80ad-45c8-aaf2-3d9a162f0aa3", 00:10:00.010 "strip_size_kb": 0, 00:10:00.010 "state": "online", 00:10:00.010 "raid_level": "raid1", 00:10:00.010 "superblock": false, 00:10:00.010 "num_base_bdevs": 3, 00:10:00.010 "num_base_bdevs_discovered": 2, 00:10:00.010 "num_base_bdevs_operational": 2, 00:10:00.010 "base_bdevs_list": [ 00:10:00.010 { 00:10:00.010 "name": null, 00:10:00.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.010 "is_configured": false, 00:10:00.010 "data_offset": 0, 00:10:00.010 "data_size": 65536 00:10:00.010 }, 00:10:00.010 { 00:10:00.010 "name": "BaseBdev2", 00:10:00.010 "uuid": "927a256a-98b6-49d2-9c58-de0744ab3526", 00:10:00.010 "is_configured": true, 00:10:00.010 "data_offset": 0, 00:10:00.010 "data_size": 65536 00:10:00.010 }, 00:10:00.010 { 00:10:00.010 "name": "BaseBdev3", 00:10:00.010 "uuid": "9044d7e8-46ef-497b-aeab-37531b2fde9c", 00:10:00.010 "is_configured": true, 00:10:00.010 "data_offset": 0, 00:10:00.010 "data_size": 65536 00:10:00.010 } 00:10:00.010 ] 
00:10:00.010 }' 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.010 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.576 [2024-11-19 10:04:14.688847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.576 10:04:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.576 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.835 [2024-11-19 10:04:14.850109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.835 [2024-11-19 10:04:14.850393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.835 [2024-11-19 10:04:14.944773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.835 [2024-11-19 10:04:14.945159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.835 [2024-11-19 10:04:14.945196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.835 10:04:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.835 10:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.836 BaseBdev2 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.836 
10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.836 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.836 [ 00:10:00.836 { 00:10:00.836 "name": "BaseBdev2", 00:10:00.836 "aliases": [ 00:10:00.836 "7c46df43-357d-4f40-8c25-b419e212d1b9" 00:10:00.836 ], 00:10:00.836 "product_name": "Malloc disk", 00:10:00.836 "block_size": 512, 00:10:00.836 "num_blocks": 65536, 00:10:00.836 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:00.836 "assigned_rate_limits": { 00:10:00.836 "rw_ios_per_sec": 0, 00:10:00.836 "rw_mbytes_per_sec": 0, 00:10:00.836 "r_mbytes_per_sec": 0, 00:10:00.836 "w_mbytes_per_sec": 0 00:10:00.836 }, 00:10:00.836 "claimed": false, 00:10:00.836 "zoned": false, 00:10:00.836 "supported_io_types": { 00:10:00.836 "read": true, 00:10:00.836 "write": true, 00:10:00.836 "unmap": true, 00:10:00.836 "flush": true, 00:10:00.836 "reset": true, 00:10:00.836 "nvme_admin": false, 00:10:00.836 "nvme_io": false, 00:10:00.836 "nvme_io_md": false, 00:10:00.836 "write_zeroes": true, 
00:10:00.836 "zcopy": true, 00:10:00.836 "get_zone_info": false, 00:10:00.836 "zone_management": false, 00:10:00.836 "zone_append": false, 00:10:00.836 "compare": false, 00:10:00.836 "compare_and_write": false, 00:10:00.836 "abort": true, 00:10:00.836 "seek_hole": false, 00:10:00.836 "seek_data": false, 00:10:00.836 "copy": true, 00:10:00.836 "nvme_iov_md": false 00:10:00.836 }, 00:10:00.836 "memory_domains": [ 00:10:00.836 { 00:10:00.836 "dma_device_id": "system", 00:10:01.094 "dma_device_type": 1 00:10:01.094 }, 00:10:01.094 { 00:10:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.094 "dma_device_type": 2 00:10:01.094 } 00:10:01.094 ], 00:10:01.094 "driver_specific": {} 00:10:01.094 } 00:10:01.094 ] 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.094 BaseBdev3 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.094 10:04:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.094 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 [ 00:10:01.095 { 00:10:01.095 "name": "BaseBdev3", 00:10:01.095 "aliases": [ 00:10:01.095 "afeb50a5-a482-41a8-808a-c3ca91758ef1" 00:10:01.095 ], 00:10:01.095 "product_name": "Malloc disk", 00:10:01.095 "block_size": 512, 00:10:01.095 "num_blocks": 65536, 00:10:01.095 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:01.095 "assigned_rate_limits": { 00:10:01.095 "rw_ios_per_sec": 0, 00:10:01.095 "rw_mbytes_per_sec": 0, 00:10:01.095 "r_mbytes_per_sec": 0, 00:10:01.095 "w_mbytes_per_sec": 0 00:10:01.095 }, 00:10:01.095 "claimed": false, 00:10:01.095 "zoned": false, 00:10:01.095 "supported_io_types": { 00:10:01.095 "read": true, 00:10:01.095 "write": true, 00:10:01.095 "unmap": true, 00:10:01.095 "flush": true, 00:10:01.095 "reset": true, 00:10:01.095 "nvme_admin": false, 00:10:01.095 "nvme_io": false, 00:10:01.095 "nvme_io_md": false, 00:10:01.095 "write_zeroes": true, 
00:10:01.095 "zcopy": true, 00:10:01.095 "get_zone_info": false, 00:10:01.095 "zone_management": false, 00:10:01.095 "zone_append": false, 00:10:01.095 "compare": false, 00:10:01.095 "compare_and_write": false, 00:10:01.095 "abort": true, 00:10:01.095 "seek_hole": false, 00:10:01.095 "seek_data": false, 00:10:01.095 "copy": true, 00:10:01.095 "nvme_iov_md": false 00:10:01.095 }, 00:10:01.095 "memory_domains": [ 00:10:01.095 { 00:10:01.095 "dma_device_id": "system", 00:10:01.095 "dma_device_type": 1 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.095 "dma_device_type": 2 00:10:01.095 } 00:10:01.095 ], 00:10:01.095 "driver_specific": {} 00:10:01.095 } 00:10:01.095 ] 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 [2024-11-19 10:04:15.153274] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.095 [2024-11-19 10:04:15.153479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.095 [2024-11-19 10:04:15.153667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.095 [2024-11-19 10:04:15.156407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:01.095 "name": "Existed_Raid", 00:10:01.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.095 "strip_size_kb": 0, 00:10:01.095 "state": "configuring", 00:10:01.095 "raid_level": "raid1", 00:10:01.095 "superblock": false, 00:10:01.095 "num_base_bdevs": 3, 00:10:01.095 "num_base_bdevs_discovered": 2, 00:10:01.095 "num_base_bdevs_operational": 3, 00:10:01.095 "base_bdevs_list": [ 00:10:01.095 { 00:10:01.095 "name": "BaseBdev1", 00:10:01.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.095 "is_configured": false, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 0 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "name": "BaseBdev2", 00:10:01.095 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "name": "BaseBdev3", 00:10:01.095 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 } 00:10:01.095 ] 00:10:01.095 }' 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.095 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 [2024-11-19 10:04:15.645399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.663 "name": "Existed_Raid", 00:10:01.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.663 "strip_size_kb": 0, 00:10:01.663 "state": "configuring", 00:10:01.663 "raid_level": "raid1", 00:10:01.663 "superblock": false, 00:10:01.663 "num_base_bdevs": 3, 
00:10:01.663 "num_base_bdevs_discovered": 1, 00:10:01.663 "num_base_bdevs_operational": 3, 00:10:01.663 "base_bdevs_list": [ 00:10:01.663 { 00:10:01.663 "name": "BaseBdev1", 00:10:01.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.663 "is_configured": false, 00:10:01.663 "data_offset": 0, 00:10:01.663 "data_size": 0 00:10:01.663 }, 00:10:01.663 { 00:10:01.663 "name": null, 00:10:01.663 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:01.663 "is_configured": false, 00:10:01.663 "data_offset": 0, 00:10:01.663 "data_size": 65536 00:10:01.663 }, 00:10:01.663 { 00:10:01.663 "name": "BaseBdev3", 00:10:01.663 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:01.663 "is_configured": true, 00:10:01.663 "data_offset": 0, 00:10:01.663 "data_size": 65536 00:10:01.663 } 00:10:01.663 ] 00:10:01.663 }' 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.663 10:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.921 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.921 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.921 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.182 10:04:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 [2024-11-19 10:04:16.227284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.182 BaseBdev1 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.182 [ 00:10:02.182 { 00:10:02.182 "name": "BaseBdev1", 00:10:02.182 "aliases": [ 00:10:02.182 "9faed237-47c0-43f8-95be-46b6dfb0a7eb" 00:10:02.182 ], 00:10:02.182 "product_name": "Malloc disk", 
00:10:02.182 "block_size": 512, 00:10:02.182 "num_blocks": 65536, 00:10:02.182 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:02.182 "assigned_rate_limits": { 00:10:02.182 "rw_ios_per_sec": 0, 00:10:02.182 "rw_mbytes_per_sec": 0, 00:10:02.182 "r_mbytes_per_sec": 0, 00:10:02.182 "w_mbytes_per_sec": 0 00:10:02.182 }, 00:10:02.182 "claimed": true, 00:10:02.182 "claim_type": "exclusive_write", 00:10:02.182 "zoned": false, 00:10:02.182 "supported_io_types": { 00:10:02.182 "read": true, 00:10:02.182 "write": true, 00:10:02.182 "unmap": true, 00:10:02.182 "flush": true, 00:10:02.182 "reset": true, 00:10:02.182 "nvme_admin": false, 00:10:02.182 "nvme_io": false, 00:10:02.182 "nvme_io_md": false, 00:10:02.182 "write_zeroes": true, 00:10:02.182 "zcopy": true, 00:10:02.182 "get_zone_info": false, 00:10:02.182 "zone_management": false, 00:10:02.182 "zone_append": false, 00:10:02.182 "compare": false, 00:10:02.182 "compare_and_write": false, 00:10:02.182 "abort": true, 00:10:02.182 "seek_hole": false, 00:10:02.182 "seek_data": false, 00:10:02.182 "copy": true, 00:10:02.182 "nvme_iov_md": false 00:10:02.182 }, 00:10:02.182 "memory_domains": [ 00:10:02.182 { 00:10:02.182 "dma_device_id": "system", 00:10:02.182 "dma_device_type": 1 00:10:02.182 }, 00:10:02.182 { 00:10:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.182 "dma_device_type": 2 00:10:02.182 } 00:10:02.182 ], 00:10:02.182 "driver_specific": {} 00:10:02.182 } 00:10:02.182 ] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.182 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.183 "name": "Existed_Raid", 00:10:02.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.183 "strip_size_kb": 0, 00:10:02.183 "state": "configuring", 00:10:02.183 "raid_level": "raid1", 00:10:02.183 "superblock": false, 00:10:02.183 "num_base_bdevs": 3, 00:10:02.183 "num_base_bdevs_discovered": 2, 00:10:02.183 "num_base_bdevs_operational": 3, 00:10:02.183 "base_bdevs_list": [ 00:10:02.183 { 00:10:02.183 "name": "BaseBdev1", 00:10:02.183 "uuid": 
"9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:02.183 "is_configured": true, 00:10:02.183 "data_offset": 0, 00:10:02.183 "data_size": 65536 00:10:02.183 }, 00:10:02.183 { 00:10:02.183 "name": null, 00:10:02.183 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:02.183 "is_configured": false, 00:10:02.183 "data_offset": 0, 00:10:02.183 "data_size": 65536 00:10:02.183 }, 00:10:02.183 { 00:10:02.183 "name": "BaseBdev3", 00:10:02.183 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:02.183 "is_configured": true, 00:10:02.183 "data_offset": 0, 00:10:02.183 "data_size": 65536 00:10:02.183 } 00:10:02.183 ] 00:10:02.183 }' 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.183 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.786 [2024-11-19 10:04:16.807492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.786 10:04:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.786 "name": "Existed_Raid", 00:10:02.786 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:02.786 "strip_size_kb": 0, 00:10:02.786 "state": "configuring", 00:10:02.786 "raid_level": "raid1", 00:10:02.786 "superblock": false, 00:10:02.786 "num_base_bdevs": 3, 00:10:02.786 "num_base_bdevs_discovered": 1, 00:10:02.786 "num_base_bdevs_operational": 3, 00:10:02.786 "base_bdevs_list": [ 00:10:02.786 { 00:10:02.786 "name": "BaseBdev1", 00:10:02.786 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:02.786 "is_configured": true, 00:10:02.786 "data_offset": 0, 00:10:02.786 "data_size": 65536 00:10:02.786 }, 00:10:02.786 { 00:10:02.786 "name": null, 00:10:02.786 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:02.786 "is_configured": false, 00:10:02.786 "data_offset": 0, 00:10:02.786 "data_size": 65536 00:10:02.786 }, 00:10:02.786 { 00:10:02.786 "name": null, 00:10:02.786 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:02.786 "is_configured": false, 00:10:02.786 "data_offset": 0, 00:10:02.786 "data_size": 65536 00:10:02.786 } 00:10:02.786 ] 00:10:02.786 }' 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.786 10:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 [2024-11-19 10:04:17.403706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.353 "name": "Existed_Raid", 00:10:03.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.353 "strip_size_kb": 0, 00:10:03.353 "state": "configuring", 00:10:03.353 "raid_level": "raid1", 00:10:03.353 "superblock": false, 00:10:03.353 "num_base_bdevs": 3, 00:10:03.353 "num_base_bdevs_discovered": 2, 00:10:03.353 "num_base_bdevs_operational": 3, 00:10:03.353 "base_bdevs_list": [ 00:10:03.353 { 00:10:03.353 "name": "BaseBdev1", 00:10:03.353 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:03.353 "is_configured": true, 00:10:03.353 "data_offset": 0, 00:10:03.353 "data_size": 65536 00:10:03.353 }, 00:10:03.353 { 00:10:03.353 "name": null, 00:10:03.353 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:03.353 "is_configured": false, 00:10:03.353 "data_offset": 0, 00:10:03.353 "data_size": 65536 00:10:03.353 }, 00:10:03.353 { 00:10:03.353 "name": "BaseBdev3", 00:10:03.353 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:03.353 "is_configured": true, 00:10:03.353 "data_offset": 0, 00:10:03.353 "data_size": 65536 00:10:03.353 } 00:10:03.353 ] 00:10:03.353 }' 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.353 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.920 10:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.920 [2024-11-19 10:04:17.947883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.920 10:04:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.920 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.920 "name": "Existed_Raid", 00:10:03.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.920 "strip_size_kb": 0, 00:10:03.920 "state": "configuring", 00:10:03.920 "raid_level": "raid1", 00:10:03.920 "superblock": false, 00:10:03.920 "num_base_bdevs": 3, 00:10:03.920 "num_base_bdevs_discovered": 1, 00:10:03.920 "num_base_bdevs_operational": 3, 00:10:03.920 "base_bdevs_list": [ 00:10:03.921 { 00:10:03.921 "name": null, 00:10:03.921 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:03.921 "is_configured": false, 00:10:03.921 "data_offset": 0, 00:10:03.921 "data_size": 65536 00:10:03.921 }, 00:10:03.921 { 00:10:03.921 "name": null, 00:10:03.921 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:03.921 "is_configured": false, 00:10:03.921 "data_offset": 0, 00:10:03.921 "data_size": 65536 00:10:03.921 }, 00:10:03.921 { 00:10:03.921 "name": "BaseBdev3", 00:10:03.921 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:03.921 "is_configured": true, 00:10:03.921 "data_offset": 0, 00:10:03.921 "data_size": 65536 00:10:03.921 } 00:10:03.921 ] 00:10:03.921 }' 00:10:03.921 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.921 10:04:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 [2024-11-19 10:04:18.612717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.487 "name": "Existed_Raid", 00:10:04.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.487 "strip_size_kb": 0, 00:10:04.487 "state": "configuring", 00:10:04.487 "raid_level": "raid1", 00:10:04.487 "superblock": false, 00:10:04.487 "num_base_bdevs": 3, 00:10:04.487 "num_base_bdevs_discovered": 2, 00:10:04.487 "num_base_bdevs_operational": 3, 00:10:04.487 "base_bdevs_list": [ 00:10:04.487 { 00:10:04.487 "name": null, 00:10:04.487 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:04.487 "is_configured": false, 00:10:04.487 "data_offset": 0, 00:10:04.487 "data_size": 65536 00:10:04.487 }, 00:10:04.487 { 00:10:04.487 "name": "BaseBdev2", 00:10:04.487 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:04.487 "is_configured": true, 00:10:04.487 "data_offset": 0, 00:10:04.487 "data_size": 65536 00:10:04.487 }, 00:10:04.487 { 
00:10:04.487 "name": "BaseBdev3", 00:10:04.487 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:04.487 "is_configured": true, 00:10:04.487 "data_offset": 0, 00:10:04.487 "data_size": 65536 00:10:04.487 } 00:10:04.487 ] 00:10:04.487 }' 00:10:04.487 10:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.488 10:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9faed237-47c0-43f8-95be-46b6dfb0a7eb 00:10:05.054 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.055 10:04:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.055 [2024-11-19 10:04:19.233897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:05.055 [2024-11-19 10:04:19.233987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.055 [2024-11-19 10:04:19.234001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:05.055 [2024-11-19 10:04:19.234332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:05.055 [2024-11-19 10:04:19.234562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.055 [2024-11-19 10:04:19.234594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:05.055 [2024-11-19 10:04:19.234943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.055 NewBaseBdev 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.055 [ 00:10:05.055 { 00:10:05.055 "name": "NewBaseBdev", 00:10:05.055 "aliases": [ 00:10:05.055 "9faed237-47c0-43f8-95be-46b6dfb0a7eb" 00:10:05.055 ], 00:10:05.055 "product_name": "Malloc disk", 00:10:05.055 "block_size": 512, 00:10:05.055 "num_blocks": 65536, 00:10:05.055 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:05.055 "assigned_rate_limits": { 00:10:05.055 "rw_ios_per_sec": 0, 00:10:05.055 "rw_mbytes_per_sec": 0, 00:10:05.055 "r_mbytes_per_sec": 0, 00:10:05.055 "w_mbytes_per_sec": 0 00:10:05.055 }, 00:10:05.055 "claimed": true, 00:10:05.055 "claim_type": "exclusive_write", 00:10:05.055 "zoned": false, 00:10:05.055 "supported_io_types": { 00:10:05.055 "read": true, 00:10:05.055 "write": true, 00:10:05.055 "unmap": true, 00:10:05.055 "flush": true, 00:10:05.055 "reset": true, 00:10:05.055 "nvme_admin": false, 00:10:05.055 "nvme_io": false, 00:10:05.055 "nvme_io_md": false, 00:10:05.055 "write_zeroes": true, 00:10:05.055 "zcopy": true, 00:10:05.055 "get_zone_info": false, 00:10:05.055 "zone_management": false, 00:10:05.055 "zone_append": false, 00:10:05.055 "compare": false, 00:10:05.055 "compare_and_write": false, 00:10:05.055 "abort": true, 00:10:05.055 "seek_hole": false, 00:10:05.055 "seek_data": false, 00:10:05.055 "copy": true, 00:10:05.055 "nvme_iov_md": false 00:10:05.055 }, 00:10:05.055 "memory_domains": [ 00:10:05.055 { 00:10:05.055 
"dma_device_id": "system", 00:10:05.055 "dma_device_type": 1 00:10:05.055 }, 00:10:05.055 { 00:10:05.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.055 "dma_device_type": 2 00:10:05.055 } 00:10:05.055 ], 00:10:05.055 "driver_specific": {} 00:10:05.055 } 00:10:05.055 ] 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.055 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.055 10:04:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.313 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.313 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.313 "name": "Existed_Raid", 00:10:05.313 "uuid": "3d905d16-1208-458e-8771-8125123d371d", 00:10:05.313 "strip_size_kb": 0, 00:10:05.313 "state": "online", 00:10:05.313 "raid_level": "raid1", 00:10:05.313 "superblock": false, 00:10:05.313 "num_base_bdevs": 3, 00:10:05.313 "num_base_bdevs_discovered": 3, 00:10:05.313 "num_base_bdevs_operational": 3, 00:10:05.313 "base_bdevs_list": [ 00:10:05.313 { 00:10:05.313 "name": "NewBaseBdev", 00:10:05.313 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:05.313 "is_configured": true, 00:10:05.313 "data_offset": 0, 00:10:05.313 "data_size": 65536 00:10:05.313 }, 00:10:05.313 { 00:10:05.313 "name": "BaseBdev2", 00:10:05.313 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:05.313 "is_configured": true, 00:10:05.313 "data_offset": 0, 00:10:05.313 "data_size": 65536 00:10:05.313 }, 00:10:05.313 { 00:10:05.313 "name": "BaseBdev3", 00:10:05.313 "uuid": "afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:05.313 "is_configured": true, 00:10:05.313 "data_offset": 0, 00:10:05.313 "data_size": 65536 00:10:05.313 } 00:10:05.313 ] 00:10:05.313 }' 00:10:05.313 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.313 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.571 
10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.571 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.829 [2024-11-19 10:04:19.806493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.829 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.829 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.829 "name": "Existed_Raid", 00:10:05.829 "aliases": [ 00:10:05.829 "3d905d16-1208-458e-8771-8125123d371d" 00:10:05.829 ], 00:10:05.829 "product_name": "Raid Volume", 00:10:05.829 "block_size": 512, 00:10:05.829 "num_blocks": 65536, 00:10:05.830 "uuid": "3d905d16-1208-458e-8771-8125123d371d", 00:10:05.830 "assigned_rate_limits": { 00:10:05.830 "rw_ios_per_sec": 0, 00:10:05.830 "rw_mbytes_per_sec": 0, 00:10:05.830 "r_mbytes_per_sec": 0, 00:10:05.830 "w_mbytes_per_sec": 0 00:10:05.830 }, 00:10:05.830 "claimed": false, 00:10:05.830 "zoned": false, 00:10:05.830 "supported_io_types": { 00:10:05.830 "read": true, 00:10:05.830 "write": true, 00:10:05.830 "unmap": false, 00:10:05.830 "flush": false, 00:10:05.830 "reset": true, 00:10:05.830 "nvme_admin": false, 00:10:05.830 "nvme_io": false, 00:10:05.830 "nvme_io_md": false, 00:10:05.830 "write_zeroes": true, 00:10:05.830 "zcopy": false, 00:10:05.830 
"get_zone_info": false, 00:10:05.830 "zone_management": false, 00:10:05.830 "zone_append": false, 00:10:05.830 "compare": false, 00:10:05.830 "compare_and_write": false, 00:10:05.830 "abort": false, 00:10:05.830 "seek_hole": false, 00:10:05.830 "seek_data": false, 00:10:05.830 "copy": false, 00:10:05.830 "nvme_iov_md": false 00:10:05.830 }, 00:10:05.830 "memory_domains": [ 00:10:05.830 { 00:10:05.830 "dma_device_id": "system", 00:10:05.830 "dma_device_type": 1 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.830 "dma_device_type": 2 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "dma_device_id": "system", 00:10:05.830 "dma_device_type": 1 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.830 "dma_device_type": 2 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "dma_device_id": "system", 00:10:05.830 "dma_device_type": 1 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.830 "dma_device_type": 2 00:10:05.830 } 00:10:05.830 ], 00:10:05.830 "driver_specific": { 00:10:05.830 "raid": { 00:10:05.830 "uuid": "3d905d16-1208-458e-8771-8125123d371d", 00:10:05.830 "strip_size_kb": 0, 00:10:05.830 "state": "online", 00:10:05.830 "raid_level": "raid1", 00:10:05.830 "superblock": false, 00:10:05.830 "num_base_bdevs": 3, 00:10:05.830 "num_base_bdevs_discovered": 3, 00:10:05.830 "num_base_bdevs_operational": 3, 00:10:05.830 "base_bdevs_list": [ 00:10:05.830 { 00:10:05.830 "name": "NewBaseBdev", 00:10:05.830 "uuid": "9faed237-47c0-43f8-95be-46b6dfb0a7eb", 00:10:05.830 "is_configured": true, 00:10:05.830 "data_offset": 0, 00:10:05.830 "data_size": 65536 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "name": "BaseBdev2", 00:10:05.830 "uuid": "7c46df43-357d-4f40-8c25-b419e212d1b9", 00:10:05.830 "is_configured": true, 00:10:05.830 "data_offset": 0, 00:10:05.830 "data_size": 65536 00:10:05.830 }, 00:10:05.830 { 00:10:05.830 "name": "BaseBdev3", 00:10:05.830 "uuid": 
"afeb50a5-a482-41a8-808a-c3ca91758ef1", 00:10:05.830 "is_configured": true, 00:10:05.830 "data_offset": 0, 00:10:05.830 "data_size": 65536 00:10:05.830 } 00:10:05.830 ] 00:10:05.830 } 00:10:05.830 } 00:10:05.830 }' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:05.830 BaseBdev2 00:10:05.830 BaseBdev3' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.830 10:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.830 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.088 
[2024-11-19 10:04:20.118208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.088 [2024-11-19 10:04:20.118260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.088 [2024-11-19 10:04:20.118381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.088 [2024-11-19 10:04:20.118810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.088 [2024-11-19 10:04:20.118839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67313 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67313 ']' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67313 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67313 00:10:06.088 killing process with pid 67313 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67313' 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67313 00:10:06.088 [2024-11-19 
10:04:20.154321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.088 10:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67313 00:10:06.347 [2024-11-19 10:04:20.447869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.721 10:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:07.721 00:10:07.721 real 0m11.677s 00:10:07.721 user 0m19.088s 00:10:07.721 sys 0m1.698s 00:10:07.721 10:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.721 ************************************ 00:10:07.721 END TEST raid_state_function_test 00:10:07.722 ************************************ 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.722 10:04:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:07.722 10:04:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.722 10:04:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.722 10:04:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.722 ************************************ 00:10:07.722 START TEST raid_state_function_test_sb 00:10:07.722 ************************************ 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.722 10:04:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:07.722 
10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67952 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.722 Process raid pid: 67952 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67952' 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67952 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67952 ']' 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.722 10:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.722 [2024-11-19 10:04:21.688589] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:07.722 [2024-11-19 10:04:21.688769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.722 [2024-11-19 10:04:21.866490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.979 [2024-11-19 10:04:22.013362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.237 [2024-11-19 10:04:22.241581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.237 [2024-11-19 10:04:22.241632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.496 [2024-11-19 10:04:22.715661] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.496 [2024-11-19 10:04:22.715743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.496 [2024-11-19 10:04:22.715763] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.496 [2024-11-19 10:04:22.715798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.496 [2024-11-19 10:04:22.715812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:08.496 [2024-11-19 10:04:22.715829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.496 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.755 10:04:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.755 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.755 "name": "Existed_Raid", 00:10:08.755 "uuid": "7ce2f110-ebe2-4749-a047-1a11b5af35c9", 00:10:08.755 "strip_size_kb": 0, 00:10:08.755 "state": "configuring", 00:10:08.755 "raid_level": "raid1", 00:10:08.755 "superblock": true, 00:10:08.755 "num_base_bdevs": 3, 00:10:08.755 "num_base_bdevs_discovered": 0, 00:10:08.755 "num_base_bdevs_operational": 3, 00:10:08.755 "base_bdevs_list": [ 00:10:08.755 { 00:10:08.755 "name": "BaseBdev1", 00:10:08.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.755 "is_configured": false, 00:10:08.755 "data_offset": 0, 00:10:08.755 "data_size": 0 00:10:08.755 }, 00:10:08.755 { 00:10:08.755 "name": "BaseBdev2", 00:10:08.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.755 "is_configured": false, 00:10:08.756 "data_offset": 0, 00:10:08.756 "data_size": 0 00:10:08.756 }, 00:10:08.756 { 00:10:08.756 "name": "BaseBdev3", 00:10:08.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.756 "is_configured": false, 00:10:08.756 "data_offset": 0, 00:10:08.756 "data_size": 0 00:10:08.756 } 00:10:08.756 ] 00:10:08.756 }' 00:10:08.756 10:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.756 10:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 [2024-11-19 10:04:23.299735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.323 [2024-11-19 10:04:23.299820] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 [2024-11-19 10:04:23.307747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.323 [2024-11-19 10:04:23.307837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.323 [2024-11-19 10:04:23.307855] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.323 [2024-11-19 10:04:23.307873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.323 [2024-11-19 10:04:23.307883] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.323 [2024-11-19 10:04:23.307898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 [2024-11-19 10:04:23.357111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.323 BaseBdev1 
00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 [ 00:10:09.323 { 00:10:09.323 "name": "BaseBdev1", 00:10:09.323 "aliases": [ 00:10:09.323 "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb" 00:10:09.323 ], 00:10:09.323 "product_name": "Malloc disk", 00:10:09.323 "block_size": 512, 00:10:09.323 "num_blocks": 65536, 00:10:09.323 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:09.323 "assigned_rate_limits": { 00:10:09.323 
"rw_ios_per_sec": 0, 00:10:09.323 "rw_mbytes_per_sec": 0, 00:10:09.323 "r_mbytes_per_sec": 0, 00:10:09.323 "w_mbytes_per_sec": 0 00:10:09.323 }, 00:10:09.323 "claimed": true, 00:10:09.323 "claim_type": "exclusive_write", 00:10:09.323 "zoned": false, 00:10:09.323 "supported_io_types": { 00:10:09.323 "read": true, 00:10:09.323 "write": true, 00:10:09.323 "unmap": true, 00:10:09.323 "flush": true, 00:10:09.323 "reset": true, 00:10:09.323 "nvme_admin": false, 00:10:09.323 "nvme_io": false, 00:10:09.323 "nvme_io_md": false, 00:10:09.323 "write_zeroes": true, 00:10:09.323 "zcopy": true, 00:10:09.323 "get_zone_info": false, 00:10:09.323 "zone_management": false, 00:10:09.323 "zone_append": false, 00:10:09.323 "compare": false, 00:10:09.323 "compare_and_write": false, 00:10:09.323 "abort": true, 00:10:09.323 "seek_hole": false, 00:10:09.323 "seek_data": false, 00:10:09.323 "copy": true, 00:10:09.323 "nvme_iov_md": false 00:10:09.323 }, 00:10:09.323 "memory_domains": [ 00:10:09.323 { 00:10:09.323 "dma_device_id": "system", 00:10:09.323 "dma_device_type": 1 00:10:09.323 }, 00:10:09.323 { 00:10:09.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.323 "dma_device_type": 2 00:10:09.323 } 00:10:09.323 ], 00:10:09.323 "driver_specific": {} 00:10:09.323 } 00:10:09.323 ] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.323 "name": "Existed_Raid", 00:10:09.323 "uuid": "de5293c2-5917-493c-9374-8446e5d8dff6", 00:10:09.323 "strip_size_kb": 0, 00:10:09.323 "state": "configuring", 00:10:09.323 "raid_level": "raid1", 00:10:09.323 "superblock": true, 00:10:09.323 "num_base_bdevs": 3, 00:10:09.323 "num_base_bdevs_discovered": 1, 00:10:09.323 "num_base_bdevs_operational": 3, 00:10:09.323 "base_bdevs_list": [ 00:10:09.323 { 00:10:09.323 "name": "BaseBdev1", 00:10:09.323 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:09.323 "is_configured": true, 00:10:09.323 "data_offset": 2048, 00:10:09.323 "data_size": 63488 
00:10:09.323 }, 00:10:09.323 { 00:10:09.323 "name": "BaseBdev2", 00:10:09.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.323 "is_configured": false, 00:10:09.323 "data_offset": 0, 00:10:09.323 "data_size": 0 00:10:09.323 }, 00:10:09.323 { 00:10:09.323 "name": "BaseBdev3", 00:10:09.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.323 "is_configured": false, 00:10:09.323 "data_offset": 0, 00:10:09.323 "data_size": 0 00:10:09.323 } 00:10:09.323 ] 00:10:09.323 }' 00:10:09.324 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.324 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.890 [2024-11-19 10:04:23.869310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.890 [2024-11-19 10:04:23.869386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.890 [2024-11-19 10:04:23.877355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.890 [2024-11-19 10:04:23.880193] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.890 [2024-11-19 10:04:23.880252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.890 [2024-11-19 10:04:23.880271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.890 [2024-11-19 10:04:23.880287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.890 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.890 "name": "Existed_Raid", 00:10:09.890 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:09.890 "strip_size_kb": 0, 00:10:09.890 "state": "configuring", 00:10:09.890 "raid_level": "raid1", 00:10:09.890 "superblock": true, 00:10:09.890 "num_base_bdevs": 3, 00:10:09.890 "num_base_bdevs_discovered": 1, 00:10:09.890 "num_base_bdevs_operational": 3, 00:10:09.890 "base_bdevs_list": [ 00:10:09.890 { 00:10:09.890 "name": "BaseBdev1", 00:10:09.890 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:09.890 "is_configured": true, 00:10:09.890 "data_offset": 2048, 00:10:09.890 "data_size": 63488 00:10:09.890 }, 00:10:09.890 { 00:10:09.890 "name": "BaseBdev2", 00:10:09.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.891 "is_configured": false, 00:10:09.891 "data_offset": 0, 00:10:09.891 "data_size": 0 00:10:09.891 }, 00:10:09.891 { 00:10:09.891 "name": "BaseBdev3", 00:10:09.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.891 "is_configured": false, 00:10:09.891 "data_offset": 0, 00:10:09.891 "data_size": 0 00:10:09.891 } 00:10:09.891 ] 00:10:09.891 }' 00:10:09.891 10:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.891 10:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:10.148 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.148 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.148 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.407 [2024-11-19 10:04:24.411599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.407 BaseBdev2 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.407 [ 00:10:10.407 { 00:10:10.407 "name": "BaseBdev2", 00:10:10.407 "aliases": [ 00:10:10.407 "40d6543e-6ff9-4b0e-803c-87e873c2b6a0" 00:10:10.407 ], 00:10:10.407 "product_name": "Malloc disk", 00:10:10.407 "block_size": 512, 00:10:10.407 "num_blocks": 65536, 00:10:10.407 "uuid": "40d6543e-6ff9-4b0e-803c-87e873c2b6a0", 00:10:10.407 "assigned_rate_limits": { 00:10:10.407 "rw_ios_per_sec": 0, 00:10:10.407 "rw_mbytes_per_sec": 0, 00:10:10.407 "r_mbytes_per_sec": 0, 00:10:10.407 "w_mbytes_per_sec": 0 00:10:10.407 }, 00:10:10.407 "claimed": true, 00:10:10.407 "claim_type": "exclusive_write", 00:10:10.407 "zoned": false, 00:10:10.407 "supported_io_types": { 00:10:10.407 "read": true, 00:10:10.407 "write": true, 00:10:10.407 "unmap": true, 00:10:10.407 "flush": true, 00:10:10.407 "reset": true, 00:10:10.407 "nvme_admin": false, 00:10:10.407 "nvme_io": false, 00:10:10.407 "nvme_io_md": false, 00:10:10.407 "write_zeroes": true, 00:10:10.407 "zcopy": true, 00:10:10.407 "get_zone_info": false, 00:10:10.407 "zone_management": false, 00:10:10.407 "zone_append": false, 00:10:10.407 "compare": false, 00:10:10.407 "compare_and_write": false, 00:10:10.407 "abort": true, 00:10:10.407 "seek_hole": false, 00:10:10.407 "seek_data": false, 00:10:10.407 "copy": true, 00:10:10.407 "nvme_iov_md": false 00:10:10.407 }, 00:10:10.407 "memory_domains": [ 00:10:10.407 { 00:10:10.407 "dma_device_id": "system", 00:10:10.407 "dma_device_type": 1 00:10:10.407 }, 00:10:10.407 { 00:10:10.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.407 "dma_device_type": 2 00:10:10.407 } 00:10:10.407 ], 00:10:10.407 "driver_specific": {} 00:10:10.407 } 00:10:10.407 ] 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.407 
10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.407 "name": "Existed_Raid", 00:10:10.407 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:10.407 "strip_size_kb": 0, 00:10:10.407 "state": "configuring", 00:10:10.407 "raid_level": "raid1", 00:10:10.407 "superblock": true, 00:10:10.407 "num_base_bdevs": 3, 00:10:10.407 "num_base_bdevs_discovered": 2, 00:10:10.407 "num_base_bdevs_operational": 3, 00:10:10.407 "base_bdevs_list": [ 00:10:10.407 { 00:10:10.407 "name": "BaseBdev1", 00:10:10.407 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:10.407 "is_configured": true, 00:10:10.407 "data_offset": 2048, 00:10:10.407 "data_size": 63488 00:10:10.407 }, 00:10:10.407 { 00:10:10.407 "name": "BaseBdev2", 00:10:10.407 "uuid": "40d6543e-6ff9-4b0e-803c-87e873c2b6a0", 00:10:10.407 "is_configured": true, 00:10:10.407 "data_offset": 2048, 00:10:10.407 "data_size": 63488 00:10:10.407 }, 00:10:10.407 { 00:10:10.407 "name": "BaseBdev3", 00:10:10.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.407 "is_configured": false, 00:10:10.407 "data_offset": 0, 00:10:10.407 "data_size": 0 00:10:10.407 } 00:10:10.407 ] 00:10:10.407 }' 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.407 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.973 [2024-11-19 10:04:24.961888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.973 [2024-11-19 10:04:24.962271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:10.973 [2024-11-19 10:04:24.962301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.973 BaseBdev3 00:10:10.973 [2024-11-19 10:04:24.962663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:10.973 [2024-11-19 10:04:24.962927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.973 [2024-11-19 10:04:24.962957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:10.973 [2024-11-19 10:04:24.963185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.973 10:04:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.973 [ 00:10:10.973 { 00:10:10.973 "name": "BaseBdev3", 00:10:10.973 "aliases": [ 00:10:10.973 "2cf428c9-c7c2-48e0-a26c-e355add0e6e1" 00:10:10.973 ], 00:10:10.973 "product_name": "Malloc disk", 00:10:10.973 "block_size": 512, 00:10:10.973 "num_blocks": 65536, 00:10:10.973 "uuid": "2cf428c9-c7c2-48e0-a26c-e355add0e6e1", 00:10:10.973 "assigned_rate_limits": { 00:10:10.973 "rw_ios_per_sec": 0, 00:10:10.973 "rw_mbytes_per_sec": 0, 00:10:10.973 "r_mbytes_per_sec": 0, 00:10:10.973 "w_mbytes_per_sec": 0 00:10:10.973 }, 00:10:10.973 "claimed": true, 00:10:10.973 "claim_type": "exclusive_write", 00:10:10.973 "zoned": false, 00:10:10.973 "supported_io_types": { 00:10:10.973 "read": true, 00:10:10.973 "write": true, 00:10:10.973 "unmap": true, 00:10:10.973 "flush": true, 00:10:10.973 "reset": true, 00:10:10.973 "nvme_admin": false, 00:10:10.973 "nvme_io": false, 00:10:10.973 "nvme_io_md": false, 00:10:10.973 "write_zeroes": true, 00:10:10.973 "zcopy": true, 00:10:10.973 "get_zone_info": false, 00:10:10.973 "zone_management": false, 00:10:10.973 "zone_append": false, 00:10:10.973 "compare": false, 00:10:10.973 "compare_and_write": false, 00:10:10.973 "abort": true, 00:10:10.973 "seek_hole": false, 00:10:10.973 "seek_data": false, 00:10:10.973 "copy": true, 00:10:10.973 "nvme_iov_md": false 00:10:10.973 }, 00:10:10.973 "memory_domains": [ 00:10:10.973 { 00:10:10.973 "dma_device_id": "system", 00:10:10.973 "dma_device_type": 1 00:10:10.973 }, 00:10:10.973 { 00:10:10.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.973 "dma_device_type": 2 00:10:10.973 } 00:10:10.973 ], 00:10:10.973 "driver_specific": {} 00:10:10.973 } 00:10:10.973 ] 
00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.973 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.974 10:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.974 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.974 10:04:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.974 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.974 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.974 "name": "Existed_Raid", 00:10:10.974 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:10.974 "strip_size_kb": 0, 00:10:10.974 "state": "online", 00:10:10.974 "raid_level": "raid1", 00:10:10.974 "superblock": true, 00:10:10.974 "num_base_bdevs": 3, 00:10:10.974 "num_base_bdevs_discovered": 3, 00:10:10.974 "num_base_bdevs_operational": 3, 00:10:10.974 "base_bdevs_list": [ 00:10:10.974 { 00:10:10.974 "name": "BaseBdev1", 00:10:10.974 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:10.974 "is_configured": true, 00:10:10.974 "data_offset": 2048, 00:10:10.974 "data_size": 63488 00:10:10.974 }, 00:10:10.974 { 00:10:10.974 "name": "BaseBdev2", 00:10:10.974 "uuid": "40d6543e-6ff9-4b0e-803c-87e873c2b6a0", 00:10:10.974 "is_configured": true, 00:10:10.974 "data_offset": 2048, 00:10:10.974 "data_size": 63488 00:10:10.974 }, 00:10:10.974 { 00:10:10.974 "name": "BaseBdev3", 00:10:10.974 "uuid": "2cf428c9-c7c2-48e0-a26c-e355add0e6e1", 00:10:10.974 "is_configured": true, 00:10:10.974 "data_offset": 2048, 00:10:10.974 "data_size": 63488 00:10:10.974 } 00:10:10.974 ] 00:10:10.974 }' 00:10:10.974 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.974 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 [2024-11-19 10:04:25.478496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.539 "name": "Existed_Raid", 00:10:11.539 "aliases": [ 00:10:11.539 "4e076981-f976-4049-9178-d1ebe75f8cf1" 00:10:11.539 ], 00:10:11.539 "product_name": "Raid Volume", 00:10:11.539 "block_size": 512, 00:10:11.539 "num_blocks": 63488, 00:10:11.539 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:11.539 "assigned_rate_limits": { 00:10:11.539 "rw_ios_per_sec": 0, 00:10:11.539 "rw_mbytes_per_sec": 0, 00:10:11.539 "r_mbytes_per_sec": 0, 00:10:11.539 "w_mbytes_per_sec": 0 00:10:11.539 }, 00:10:11.539 "claimed": false, 00:10:11.539 "zoned": false, 00:10:11.539 "supported_io_types": { 00:10:11.539 "read": true, 00:10:11.539 "write": true, 00:10:11.539 "unmap": false, 00:10:11.539 "flush": false, 00:10:11.539 "reset": true, 00:10:11.539 "nvme_admin": false, 00:10:11.539 "nvme_io": false, 00:10:11.539 "nvme_io_md": false, 00:10:11.539 
"write_zeroes": true, 00:10:11.539 "zcopy": false, 00:10:11.539 "get_zone_info": false, 00:10:11.539 "zone_management": false, 00:10:11.539 "zone_append": false, 00:10:11.539 "compare": false, 00:10:11.539 "compare_and_write": false, 00:10:11.539 "abort": false, 00:10:11.539 "seek_hole": false, 00:10:11.539 "seek_data": false, 00:10:11.539 "copy": false, 00:10:11.539 "nvme_iov_md": false 00:10:11.539 }, 00:10:11.539 "memory_domains": [ 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "system", 00:10:11.539 "dma_device_type": 1 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.539 "dma_device_type": 2 00:10:11.539 } 00:10:11.539 ], 00:10:11.539 "driver_specific": { 00:10:11.539 "raid": { 00:10:11.539 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:11.539 "strip_size_kb": 0, 00:10:11.539 "state": "online", 00:10:11.539 "raid_level": "raid1", 00:10:11.539 "superblock": true, 00:10:11.539 "num_base_bdevs": 3, 00:10:11.539 "num_base_bdevs_discovered": 3, 00:10:11.539 "num_base_bdevs_operational": 3, 00:10:11.539 "base_bdevs_list": [ 00:10:11.539 { 00:10:11.539 "name": "BaseBdev1", 00:10:11.539 "uuid": "cfb6dc84-b8c9-4c4a-b9dd-17c0d91a98cb", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 }, 00:10:11.539 { 00:10:11.539 "name": "BaseBdev2", 00:10:11.539 "uuid": "40d6543e-6ff9-4b0e-803c-87e873c2b6a0", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 }, 
00:10:11.539 { 00:10:11.539 "name": "BaseBdev3", 00:10:11.539 "uuid": "2cf428c9-c7c2-48e0-a26c-e355add0e6e1", 00:10:11.539 "is_configured": true, 00:10:11.539 "data_offset": 2048, 00:10:11.539 "data_size": 63488 00:10:11.539 } 00:10:11.539 ] 00:10:11.539 } 00:10:11.539 } 00:10:11.539 }' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.539 BaseBdev2 00:10:11.539 BaseBdev3' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.539 
10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.539 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.540 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.797 [2024-11-19 10:04:25.798272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.797 
10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.797 "name": "Existed_Raid", 00:10:11.797 "uuid": "4e076981-f976-4049-9178-d1ebe75f8cf1", 00:10:11.797 "strip_size_kb": 0, 00:10:11.797 "state": "online", 00:10:11.797 "raid_level": "raid1", 00:10:11.797 "superblock": true, 00:10:11.797 "num_base_bdevs": 3, 00:10:11.797 "num_base_bdevs_discovered": 2, 00:10:11.797 "num_base_bdevs_operational": 2, 00:10:11.797 "base_bdevs_list": [ 00:10:11.797 { 00:10:11.797 "name": null, 00:10:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.797 "is_configured": false, 00:10:11.797 "data_offset": 0, 00:10:11.797 "data_size": 63488 00:10:11.797 }, 00:10:11.797 { 00:10:11.797 "name": "BaseBdev2", 00:10:11.797 "uuid": "40d6543e-6ff9-4b0e-803c-87e873c2b6a0", 00:10:11.797 "is_configured": true, 00:10:11.797 "data_offset": 2048, 00:10:11.797 "data_size": 63488 00:10:11.797 }, 00:10:11.797 { 00:10:11.797 "name": "BaseBdev3", 00:10:11.797 "uuid": "2cf428c9-c7c2-48e0-a26c-e355add0e6e1", 00:10:11.797 "is_configured": true, 00:10:11.797 "data_offset": 2048, 00:10:11.797 "data_size": 63488 00:10:11.797 } 00:10:11.797 ] 00:10:11.797 }' 00:10:11.797 10:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.797 
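Annotation: after `bdev_malloc_delete BaseBdev1`, the call `verify_raid_bdev_state Existed_Raid online raid1 0 2` checks the surviving raid bdev against expected state, raid level, strip size, and operational bdev count — raid1 has redundancy, so losing one of three base bdevs leaves the array online. A rough Python sketch of that comparison (an approximation inferred from the shell locals visible in the log; the real helper is a shell function whose exact comparisons are not shown in this excerpt), using the field values from the `Existed_Raid` dump above:

```python
import json

# Abbreviated raid bdev info as dumped at bdev_raid.sh@113 after
# BaseBdev1 was deleted (values copied from the log above).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Hypothetical analogue of bdev_raid.sh's verify_raid_bdev_state:
    # compare the dumped fields against the expected values.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

ok = verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
print(ok)
```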
10:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 [2024-11-19 10:04:26.471699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.622 [2024-11-19 10:04:26.616900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.622 [2024-11-19 10:04:26.617067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.622 [2024-11-19 10:04:26.711080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.622 [2024-11-19 10:04:26.711172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.622 [2024-11-19 10:04:26.711194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.622 BaseBdev2 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.622 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.623 [ 00:10:12.623 { 00:10:12.623 "name": "BaseBdev2", 00:10:12.623 "aliases": [ 00:10:12.623 "b6ac1817-5698-4924-85cd-576c6c7fe3f7" 00:10:12.623 ], 00:10:12.623 "product_name": "Malloc disk", 00:10:12.623 "block_size": 512, 00:10:12.623 "num_blocks": 65536, 00:10:12.623 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:12.623 "assigned_rate_limits": { 00:10:12.623 "rw_ios_per_sec": 0, 00:10:12.623 "rw_mbytes_per_sec": 0, 00:10:12.623 "r_mbytes_per_sec": 0, 00:10:12.623 "w_mbytes_per_sec": 0 00:10:12.623 }, 00:10:12.623 "claimed": false, 00:10:12.623 "zoned": false, 00:10:12.623 "supported_io_types": { 00:10:12.623 "read": true, 00:10:12.623 "write": true, 00:10:12.623 "unmap": true, 00:10:12.623 "flush": true, 00:10:12.623 "reset": true, 00:10:12.623 "nvme_admin": false, 00:10:12.623 "nvme_io": false, 00:10:12.623 
"nvme_io_md": false, 00:10:12.623 "write_zeroes": true, 00:10:12.623 "zcopy": true, 00:10:12.623 "get_zone_info": false, 00:10:12.623 "zone_management": false, 00:10:12.623 "zone_append": false, 00:10:12.623 "compare": false, 00:10:12.623 "compare_and_write": false, 00:10:12.623 "abort": true, 00:10:12.623 "seek_hole": false, 00:10:12.623 "seek_data": false, 00:10:12.623 "copy": true, 00:10:12.623 "nvme_iov_md": false 00:10:12.623 }, 00:10:12.623 "memory_domains": [ 00:10:12.623 { 00:10:12.623 "dma_device_id": "system", 00:10:12.623 "dma_device_type": 1 00:10:12.623 }, 00:10:12.623 { 00:10:12.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.623 "dma_device_type": 2 00:10:12.623 } 00:10:12.623 ], 00:10:12.623 "driver_specific": {} 00:10:12.623 } 00:10:12.623 ] 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.623 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.881 BaseBdev3 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.881 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.881 [ 00:10:12.881 { 00:10:12.881 "name": "BaseBdev3", 00:10:12.881 "aliases": [ 00:10:12.881 "553beb0b-66bc-4ade-95d5-afef4b1ce6a8" 00:10:12.881 ], 00:10:12.881 "product_name": "Malloc disk", 00:10:12.881 "block_size": 512, 00:10:12.881 "num_blocks": 65536, 00:10:12.881 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:12.881 "assigned_rate_limits": { 00:10:12.881 "rw_ios_per_sec": 0, 00:10:12.881 "rw_mbytes_per_sec": 0, 00:10:12.881 "r_mbytes_per_sec": 0, 00:10:12.881 "w_mbytes_per_sec": 0 00:10:12.881 }, 00:10:12.881 "claimed": false, 00:10:12.881 "zoned": false, 00:10:12.881 "supported_io_types": { 00:10:12.881 "read": true, 00:10:12.881 "write": true, 00:10:12.881 "unmap": true, 00:10:12.881 "flush": true, 00:10:12.881 "reset": true, 00:10:12.881 "nvme_admin": false, 
00:10:12.881 "nvme_io": false, 00:10:12.881 "nvme_io_md": false, 00:10:12.881 "write_zeroes": true, 00:10:12.881 "zcopy": true, 00:10:12.881 "get_zone_info": false, 00:10:12.881 "zone_management": false, 00:10:12.881 "zone_append": false, 00:10:12.881 "compare": false, 00:10:12.881 "compare_and_write": false, 00:10:12.881 "abort": true, 00:10:12.881 "seek_hole": false, 00:10:12.881 "seek_data": false, 00:10:12.882 "copy": true, 00:10:12.882 "nvme_iov_md": false 00:10:12.882 }, 00:10:12.882 "memory_domains": [ 00:10:12.882 { 00:10:12.882 "dma_device_id": "system", 00:10:12.882 "dma_device_type": 1 00:10:12.882 }, 00:10:12.882 { 00:10:12.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.882 "dma_device_type": 2 00:10:12.882 } 00:10:12.882 ], 00:10:12.882 "driver_specific": {} 00:10:12.882 } 00:10:12.882 ] 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 [2024-11-19 10:04:26.935346] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.882 [2024-11-19 10:04:26.935421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.882 [2024-11-19 10:04:26.935460] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.882 [2024-11-19 10:04:26.938269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 
10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.882 "name": "Existed_Raid", 00:10:12.882 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:12.882 "strip_size_kb": 0, 00:10:12.882 "state": "configuring", 00:10:12.882 "raid_level": "raid1", 00:10:12.882 "superblock": true, 00:10:12.882 "num_base_bdevs": 3, 00:10:12.882 "num_base_bdevs_discovered": 2, 00:10:12.882 "num_base_bdevs_operational": 3, 00:10:12.882 "base_bdevs_list": [ 00:10:12.882 { 00:10:12.882 "name": "BaseBdev1", 00:10:12.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.882 "is_configured": false, 00:10:12.882 "data_offset": 0, 00:10:12.882 "data_size": 0 00:10:12.882 }, 00:10:12.882 { 00:10:12.882 "name": "BaseBdev2", 00:10:12.882 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:12.882 "is_configured": true, 00:10:12.882 "data_offset": 2048, 00:10:12.882 "data_size": 63488 00:10:12.882 }, 00:10:12.882 { 00:10:12.882 "name": "BaseBdev3", 00:10:12.882 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:12.882 "is_configured": true, 00:10:12.882 "data_offset": 2048, 00:10:12.882 "data_size": 63488 00:10:12.882 } 00:10:12.882 ] 00:10:12.882 }' 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.882 10:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.449 [2024-11-19 10:04:27.419469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.449 10:04:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.449 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.449 "name": 
"Existed_Raid", 00:10:13.449 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:13.449 "strip_size_kb": 0, 00:10:13.449 "state": "configuring", 00:10:13.449 "raid_level": "raid1", 00:10:13.449 "superblock": true, 00:10:13.449 "num_base_bdevs": 3, 00:10:13.449 "num_base_bdevs_discovered": 1, 00:10:13.449 "num_base_bdevs_operational": 3, 00:10:13.449 "base_bdevs_list": [ 00:10:13.449 { 00:10:13.449 "name": "BaseBdev1", 00:10:13.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.449 "is_configured": false, 00:10:13.449 "data_offset": 0, 00:10:13.449 "data_size": 0 00:10:13.449 }, 00:10:13.449 { 00:10:13.449 "name": null, 00:10:13.449 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:13.449 "is_configured": false, 00:10:13.449 "data_offset": 0, 00:10:13.449 "data_size": 63488 00:10:13.449 }, 00:10:13.449 { 00:10:13.449 "name": "BaseBdev3", 00:10:13.449 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:13.450 "is_configured": true, 00:10:13.450 "data_offset": 2048, 00:10:13.450 "data_size": 63488 00:10:13.450 } 00:10:13.450 ] 00:10:13.450 }' 00:10:13.450 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.450 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:14.018 
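Annotation: the check at `bdev_raid.sh@295` above (`jq '.[0].base_bdevs_list[1].is_configured'` followed by `[[ false == \f\a\l\s\e ]]`) confirms that after `bdev_raid_remove_base_bdev BaseBdev2` the second slot is deconfigured while the array stays in `configuring`. A small Python analogue on the `base_bdevs_list` dumped above:

```python
import json

# base_bdevs_list as dumped at bdev_raid.sh@113 after
# bdev_raid_remove_base_bdev BaseBdev2 (values from the log above:
# the removed slot keeps a null name but is no longer configured).
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "is_configured": false,
   "data_offset": 0, "data_size": 0},
  {"name": null, "is_configured": false,
   "data_offset": 0, "data_size": 63488},
  {"name": "BaseBdev3", "is_configured": true,
   "data_offset": 2048, "data_size": 63488}
]
""")

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'
slot1_configured = base_bdevs_list[1]["is_configured"]
print(slot1_configured)
```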
10:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.018 10:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.018 [2024-11-19 10:04:28.033046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.018 BaseBdev1 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.018 [ 00:10:14.018 { 00:10:14.018 "name": "BaseBdev1", 00:10:14.018 "aliases": [ 00:10:14.018 "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4" 00:10:14.018 ], 00:10:14.018 "product_name": "Malloc disk", 00:10:14.018 "block_size": 512, 00:10:14.018 "num_blocks": 65536, 00:10:14.018 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:14.018 "assigned_rate_limits": { 00:10:14.018 "rw_ios_per_sec": 0, 00:10:14.018 "rw_mbytes_per_sec": 0, 00:10:14.018 "r_mbytes_per_sec": 0, 00:10:14.018 "w_mbytes_per_sec": 0 00:10:14.018 }, 00:10:14.018 "claimed": true, 00:10:14.018 "claim_type": "exclusive_write", 00:10:14.018 "zoned": false, 00:10:14.018 "supported_io_types": { 00:10:14.018 "read": true, 00:10:14.018 "write": true, 00:10:14.018 "unmap": true, 00:10:14.018 "flush": true, 00:10:14.018 "reset": true, 00:10:14.018 "nvme_admin": false, 00:10:14.018 "nvme_io": false, 00:10:14.018 "nvme_io_md": false, 00:10:14.018 "write_zeroes": true, 00:10:14.018 "zcopy": true, 00:10:14.018 "get_zone_info": false, 00:10:14.018 "zone_management": false, 00:10:14.018 "zone_append": false, 00:10:14.018 "compare": false, 00:10:14.018 "compare_and_write": false, 00:10:14.018 "abort": true, 00:10:14.018 "seek_hole": false, 00:10:14.018 "seek_data": false, 00:10:14.018 "copy": true, 00:10:14.018 "nvme_iov_md": false 00:10:14.018 }, 00:10:14.018 "memory_domains": [ 00:10:14.018 { 00:10:14.018 "dma_device_id": "system", 00:10:14.018 "dma_device_type": 1 00:10:14.018 }, 00:10:14.018 { 00:10:14.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.018 "dma_device_type": 2 00:10:14.018 } 00:10:14.018 ], 00:10:14.018 "driver_specific": {} 00:10:14.018 } 00:10:14.018 ] 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.018 
10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.018 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.019 "name": "Existed_Raid", 00:10:14.019 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:14.019 "strip_size_kb": 0, 
00:10:14.019 "state": "configuring", 00:10:14.019 "raid_level": "raid1", 00:10:14.019 "superblock": true, 00:10:14.019 "num_base_bdevs": 3, 00:10:14.019 "num_base_bdevs_discovered": 2, 00:10:14.019 "num_base_bdevs_operational": 3, 00:10:14.019 "base_bdevs_list": [ 00:10:14.019 { 00:10:14.019 "name": "BaseBdev1", 00:10:14.019 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:14.019 "is_configured": true, 00:10:14.019 "data_offset": 2048, 00:10:14.019 "data_size": 63488 00:10:14.019 }, 00:10:14.019 { 00:10:14.019 "name": null, 00:10:14.019 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:14.019 "is_configured": false, 00:10:14.019 "data_offset": 0, 00:10:14.019 "data_size": 63488 00:10:14.019 }, 00:10:14.019 { 00:10:14.019 "name": "BaseBdev3", 00:10:14.019 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:14.019 "is_configured": true, 00:10:14.019 "data_offset": 2048, 00:10:14.019 "data_size": 63488 00:10:14.019 } 00:10:14.019 ] 00:10:14.019 }' 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.019 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.586 [2024-11-19 10:04:28.645272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.586 10:04:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.586 "name": "Existed_Raid", 00:10:14.586 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:14.586 "strip_size_kb": 0, 00:10:14.586 "state": "configuring", 00:10:14.586 "raid_level": "raid1", 00:10:14.586 "superblock": true, 00:10:14.586 "num_base_bdevs": 3, 00:10:14.586 "num_base_bdevs_discovered": 1, 00:10:14.586 "num_base_bdevs_operational": 3, 00:10:14.586 "base_bdevs_list": [ 00:10:14.586 { 00:10:14.586 "name": "BaseBdev1", 00:10:14.586 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:14.586 "is_configured": true, 00:10:14.586 "data_offset": 2048, 00:10:14.586 "data_size": 63488 00:10:14.586 }, 00:10:14.586 { 00:10:14.586 "name": null, 00:10:14.586 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:14.586 "is_configured": false, 00:10:14.586 "data_offset": 0, 00:10:14.586 "data_size": 63488 00:10:14.586 }, 00:10:14.586 { 00:10:14.586 "name": null, 00:10:14.586 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:14.586 "is_configured": false, 00:10:14.586 "data_offset": 0, 00:10:14.586 "data_size": 63488 00:10:14.586 } 00:10:14.586 ] 00:10:14.586 }' 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.586 10:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 10:04:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 [2024-11-19 10:04:29.209473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.153 "name": "Existed_Raid", 00:10:15.153 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:15.153 "strip_size_kb": 0, 00:10:15.153 "state": "configuring", 00:10:15.153 "raid_level": "raid1", 00:10:15.153 "superblock": true, 00:10:15.153 "num_base_bdevs": 3, 00:10:15.153 "num_base_bdevs_discovered": 2, 00:10:15.153 "num_base_bdevs_operational": 3, 00:10:15.153 "base_bdevs_list": [ 00:10:15.153 { 00:10:15.153 "name": "BaseBdev1", 00:10:15.153 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:15.153 "is_configured": true, 00:10:15.153 "data_offset": 2048, 00:10:15.153 "data_size": 63488 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "name": null, 00:10:15.153 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:15.153 "is_configured": false, 00:10:15.153 "data_offset": 0, 00:10:15.153 "data_size": 63488 00:10:15.153 }, 00:10:15.153 { 00:10:15.153 "name": "BaseBdev3", 00:10:15.153 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:15.153 "is_configured": true, 00:10:15.153 "data_offset": 2048, 00:10:15.153 "data_size": 63488 00:10:15.153 } 00:10:15.153 ] 00:10:15.153 }' 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.153 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 [2024-11-19 10:04:29.781619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.721 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.721 "name": "Existed_Raid", 00:10:15.721 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:15.721 "strip_size_kb": 0, 00:10:15.721 "state": "configuring", 00:10:15.722 "raid_level": "raid1", 00:10:15.722 "superblock": true, 00:10:15.722 "num_base_bdevs": 3, 00:10:15.722 "num_base_bdevs_discovered": 1, 00:10:15.722 "num_base_bdevs_operational": 3, 00:10:15.722 "base_bdevs_list": [ 00:10:15.722 { 00:10:15.722 "name": null, 00:10:15.722 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:15.722 "is_configured": false, 00:10:15.722 "data_offset": 0, 00:10:15.722 "data_size": 63488 00:10:15.722 }, 00:10:15.722 { 00:10:15.722 "name": null, 00:10:15.722 "uuid": 
"b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:15.722 "is_configured": false, 00:10:15.722 "data_offset": 0, 00:10:15.722 "data_size": 63488 00:10:15.722 }, 00:10:15.722 { 00:10:15.722 "name": "BaseBdev3", 00:10:15.722 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:15.722 "is_configured": true, 00:10:15.722 "data_offset": 2048, 00:10:15.722 "data_size": 63488 00:10:15.722 } 00:10:15.722 ] 00:10:15.722 }' 00:10:15.722 10:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.722 10:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.289 [2024-11-19 10:04:30.438682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.289 "name": "Existed_Raid", 00:10:16.289 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:16.289 "strip_size_kb": 0, 00:10:16.289 "state": "configuring", 00:10:16.289 
"raid_level": "raid1", 00:10:16.289 "superblock": true, 00:10:16.289 "num_base_bdevs": 3, 00:10:16.289 "num_base_bdevs_discovered": 2, 00:10:16.289 "num_base_bdevs_operational": 3, 00:10:16.289 "base_bdevs_list": [ 00:10:16.289 { 00:10:16.289 "name": null, 00:10:16.289 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:16.289 "is_configured": false, 00:10:16.289 "data_offset": 0, 00:10:16.289 "data_size": 63488 00:10:16.289 }, 00:10:16.289 { 00:10:16.289 "name": "BaseBdev2", 00:10:16.289 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:16.289 "is_configured": true, 00:10:16.289 "data_offset": 2048, 00:10:16.289 "data_size": 63488 00:10:16.289 }, 00:10:16.289 { 00:10:16.289 "name": "BaseBdev3", 00:10:16.289 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:16.289 "is_configured": true, 00:10:16.289 "data_offset": 2048, 00:10:16.289 "data_size": 63488 00:10:16.289 } 00:10:16.289 ] 00:10:16.289 }' 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.289 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.855 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.855 10:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.855 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.855 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.855 10:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.855 10:04:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.855 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 [2024-11-19 10:04:31.108134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:17.114 [2024-11-19 10:04:31.108467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.114 [2024-11-19 10:04:31.108485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.114 NewBaseBdev 00:10:17.114 [2024-11-19 10:04:31.108827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:17.114 [2024-11-19 10:04:31.109038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.114 [2024-11-19 10:04:31.109061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:17.114 [2024-11-19 10:04:31.109228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:17.114 
10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 [ 00:10:17.114 { 00:10:17.114 "name": "NewBaseBdev", 00:10:17.114 "aliases": [ 00:10:17.114 "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4" 00:10:17.114 ], 00:10:17.114 "product_name": "Malloc disk", 00:10:17.114 "block_size": 512, 00:10:17.114 "num_blocks": 65536, 00:10:17.114 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:17.114 "assigned_rate_limits": { 00:10:17.114 "rw_ios_per_sec": 0, 00:10:17.114 "rw_mbytes_per_sec": 0, 00:10:17.114 "r_mbytes_per_sec": 0, 00:10:17.114 "w_mbytes_per_sec": 0 00:10:17.114 }, 00:10:17.114 "claimed": true, 00:10:17.114 "claim_type": "exclusive_write", 00:10:17.114 
"zoned": false, 00:10:17.114 "supported_io_types": { 00:10:17.114 "read": true, 00:10:17.114 "write": true, 00:10:17.114 "unmap": true, 00:10:17.114 "flush": true, 00:10:17.114 "reset": true, 00:10:17.114 "nvme_admin": false, 00:10:17.114 "nvme_io": false, 00:10:17.114 "nvme_io_md": false, 00:10:17.114 "write_zeroes": true, 00:10:17.114 "zcopy": true, 00:10:17.114 "get_zone_info": false, 00:10:17.114 "zone_management": false, 00:10:17.114 "zone_append": false, 00:10:17.114 "compare": false, 00:10:17.114 "compare_and_write": false, 00:10:17.114 "abort": true, 00:10:17.114 "seek_hole": false, 00:10:17.114 "seek_data": false, 00:10:17.114 "copy": true, 00:10:17.114 "nvme_iov_md": false 00:10:17.114 }, 00:10:17.114 "memory_domains": [ 00:10:17.114 { 00:10:17.114 "dma_device_id": "system", 00:10:17.114 "dma_device_type": 1 00:10:17.114 }, 00:10:17.114 { 00:10:17.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.114 "dma_device_type": 2 00:10:17.114 } 00:10:17.114 ], 00:10:17.114 "driver_specific": {} 00:10:17.114 } 00:10:17.114 ] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.114 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.114 "name": "Existed_Raid", 00:10:17.114 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:17.114 "strip_size_kb": 0, 00:10:17.114 "state": "online", 00:10:17.114 "raid_level": "raid1", 00:10:17.114 "superblock": true, 00:10:17.114 "num_base_bdevs": 3, 00:10:17.114 "num_base_bdevs_discovered": 3, 00:10:17.114 "num_base_bdevs_operational": 3, 00:10:17.114 "base_bdevs_list": [ 00:10:17.114 { 00:10:17.114 "name": "NewBaseBdev", 00:10:17.114 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:17.114 "is_configured": true, 00:10:17.114 "data_offset": 2048, 00:10:17.114 "data_size": 63488 00:10:17.114 }, 00:10:17.114 { 00:10:17.114 "name": "BaseBdev2", 00:10:17.114 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:17.114 "is_configured": true, 00:10:17.115 "data_offset": 2048, 00:10:17.115 "data_size": 63488 00:10:17.115 }, 00:10:17.115 
{ 00:10:17.115 "name": "BaseBdev3", 00:10:17.115 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:17.115 "is_configured": true, 00:10:17.115 "data_offset": 2048, 00:10:17.115 "data_size": 63488 00:10:17.115 } 00:10:17.115 ] 00:10:17.115 }' 00:10:17.115 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.115 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.684 [2024-11-19 10:04:31.668723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.684 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.684 "name": "Existed_Raid", 00:10:17.684 
"aliases": [ 00:10:17.684 "786c0a58-9d99-4f5e-86ca-38b098f5c08c" 00:10:17.684 ], 00:10:17.684 "product_name": "Raid Volume", 00:10:17.684 "block_size": 512, 00:10:17.684 "num_blocks": 63488, 00:10:17.684 "uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:17.684 "assigned_rate_limits": { 00:10:17.684 "rw_ios_per_sec": 0, 00:10:17.684 "rw_mbytes_per_sec": 0, 00:10:17.684 "r_mbytes_per_sec": 0, 00:10:17.684 "w_mbytes_per_sec": 0 00:10:17.684 }, 00:10:17.684 "claimed": false, 00:10:17.684 "zoned": false, 00:10:17.684 "supported_io_types": { 00:10:17.684 "read": true, 00:10:17.684 "write": true, 00:10:17.684 "unmap": false, 00:10:17.684 "flush": false, 00:10:17.684 "reset": true, 00:10:17.684 "nvme_admin": false, 00:10:17.684 "nvme_io": false, 00:10:17.684 "nvme_io_md": false, 00:10:17.684 "write_zeroes": true, 00:10:17.684 "zcopy": false, 00:10:17.684 "get_zone_info": false, 00:10:17.684 "zone_management": false, 00:10:17.684 "zone_append": false, 00:10:17.684 "compare": false, 00:10:17.684 "compare_and_write": false, 00:10:17.684 "abort": false, 00:10:17.684 "seek_hole": false, 00:10:17.684 "seek_data": false, 00:10:17.684 "copy": false, 00:10:17.684 "nvme_iov_md": false 00:10:17.684 }, 00:10:17.684 "memory_domains": [ 00:10:17.684 { 00:10:17.684 "dma_device_id": "system", 00:10:17.684 "dma_device_type": 1 00:10:17.684 }, 00:10:17.684 { 00:10:17.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.684 "dma_device_type": 2 00:10:17.684 }, 00:10:17.684 { 00:10:17.684 "dma_device_id": "system", 00:10:17.684 "dma_device_type": 1 00:10:17.684 }, 00:10:17.684 { 00:10:17.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.684 "dma_device_type": 2 00:10:17.684 }, 00:10:17.684 { 00:10:17.684 "dma_device_id": "system", 00:10:17.684 "dma_device_type": 1 00:10:17.684 }, 00:10:17.684 { 00:10:17.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.684 "dma_device_type": 2 00:10:17.684 } 00:10:17.684 ], 00:10:17.684 "driver_specific": { 00:10:17.684 "raid": { 00:10:17.685 
"uuid": "786c0a58-9d99-4f5e-86ca-38b098f5c08c", 00:10:17.685 "strip_size_kb": 0, 00:10:17.685 "state": "online", 00:10:17.685 "raid_level": "raid1", 00:10:17.685 "superblock": true, 00:10:17.685 "num_base_bdevs": 3, 00:10:17.685 "num_base_bdevs_discovered": 3, 00:10:17.685 "num_base_bdevs_operational": 3, 00:10:17.685 "base_bdevs_list": [ 00:10:17.685 { 00:10:17.685 "name": "NewBaseBdev", 00:10:17.685 "uuid": "3cf38dcd-2fdb-432e-bbdb-4e86b83b2af4", 00:10:17.685 "is_configured": true, 00:10:17.685 "data_offset": 2048, 00:10:17.685 "data_size": 63488 00:10:17.685 }, 00:10:17.685 { 00:10:17.685 "name": "BaseBdev2", 00:10:17.685 "uuid": "b6ac1817-5698-4924-85cd-576c6c7fe3f7", 00:10:17.685 "is_configured": true, 00:10:17.685 "data_offset": 2048, 00:10:17.685 "data_size": 63488 00:10:17.685 }, 00:10:17.685 { 00:10:17.685 "name": "BaseBdev3", 00:10:17.685 "uuid": "553beb0b-66bc-4ade-95d5-afef4b1ce6a8", 00:10:17.685 "is_configured": true, 00:10:17.685 "data_offset": 2048, 00:10:17.685 "data_size": 63488 00:10:17.685 } 00:10:17.685 ] 00:10:17.685 } 00:10:17.685 } 00:10:17.685 }' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.685 BaseBdev2 00:10:17.685 BaseBdev3' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:17.685 10:04:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.685 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.944 [2024-11-19 10:04:31.956424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.944 [2024-11-19 10:04:31.956475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.944 [2024-11-19 10:04:31.956591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.944 [2024-11-19 10:04:31.957004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.944 [2024-11-19 10:04:31.957032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67952 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67952 ']' 
00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67952 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67952 00:10:17.944 killing process with pid 67952 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67952' 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67952 00:10:17.944 [2024-11-19 10:04:31.995287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.944 10:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67952 00:10:18.203 [2024-11-19 10:04:32.292394] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.196 10:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.196 00:10:19.196 real 0m11.839s 00:10:19.196 user 0m19.371s 00:10:19.196 sys 0m1.747s 00:10:19.196 10:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.196 10:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.196 ************************************ 00:10:19.196 END TEST raid_state_function_test_sb 00:10:19.196 ************************************ 00:10:19.455 10:04:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
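The `verify_raid_bdev_properties` loop that just completed above joins the four geometry fields `[.block_size, .md_size, .md_interleave, .dif_type]` of the raid bdev into one string and compares it against the same fields of each configured base bdev (hence the repeated `[[ 512 == \5\1\2\ \ \ ]]` checks). A minimal standalone sketch of that comparison follows; it substitutes `python3` for the test's `jq` filter and uses inline sample JSON instead of live `rpc_cmd bdev_get_bdevs` output (both are assumptions made for portability, not part of the test script itself):

```shell
#!/usr/bin/env bash
# Sample bdev records, standing in for `bdev_get_bdevs` RPC output.
raid_json='{"name":"raid_bdev1","block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
base_json='{"name":"pt1","block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

# Join the four geometry fields into one comparable string, mirroring the
# jq filter '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# used in bdev_raid.sh (null fields become empty strings, so a bdev with
# no metadata yields "512" followed by three separator spaces).
geometry() {
    python3 -c '
import json, sys
b = json.loads(sys.argv[1])
print(" ".join("" if b[k] is None else str(b[k])
               for k in ("block_size", "md_size", "md_interleave", "dif_type")))
' "$1"
}

cmp_raid_bdev=$(geometry "$raid_json")
cmp_base_bdev=$(geometry "$base_json")

# The raid volume and every configured base bdev must report identical
# geometry, otherwise the raid bdev could not have been assembled from them.
if [ "$cmp_raid_bdev" = "$cmp_base_bdev" ]; then
    echo "geometry match: ${cmp_base_bdev%% *}"
else
    echo "geometry mismatch: '$cmp_raid_bdev' vs '$cmp_base_bdev'" >&2
    exit 1
fi
```

Comparing the joined string rather than each field separately is what lets the test express "same block size and same (absent) metadata layout" as a single `[[ ... == ... ]]` check per base bdev.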
00:10:19.455 10:04:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.455 10:04:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.455 10:04:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 ************************************ 00:10:19.455 START TEST raid_superblock_test 00:10:19.455 ************************************ 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68583 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68583 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68583 ']' 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.455 10:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 [2024-11-19 10:04:33.575194] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:19.455 [2024-11-19 10:04:33.575475] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68583 ] 00:10:19.714 [2024-11-19 10:04:33.760971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.714 [2024-11-19 10:04:33.909412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.972 [2024-11-19 10:04:34.135307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.972 [2024-11-19 10:04:34.135391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.539 
10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.539 malloc1 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.539 [2024-11-19 10:04:34.737679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.539 [2024-11-19 10:04:34.737816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.539 [2024-11-19 10:04:34.737859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.539 [2024-11-19 10:04:34.737877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.539 [2024-11-19 10:04:34.741143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.539 [2024-11-19 10:04:34.741201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.539 pt1 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.539 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.798 malloc2 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.798 [2024-11-19 10:04:34.798089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.798 [2024-11-19 10:04:34.798197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.798 [2024-11-19 10:04:34.798239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.798 [2024-11-19 10:04:34.798255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.798 [2024-11-19 10:04:34.801489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.798 [2024-11-19 10:04:34.801548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.798 
pt2 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.798 malloc3 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.798 [2024-11-19 10:04:34.867461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.798 [2024-11-19 10:04:34.867562] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.798 [2024-11-19 10:04:34.867602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.798 [2024-11-19 10:04:34.867618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.798 [2024-11-19 10:04:34.870859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.798 [2024-11-19 10:04:34.870914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.798 pt3 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.798 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.798 [2024-11-19 10:04:34.879813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.798 [2024-11-19 10:04:34.882593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.798 [2024-11-19 10:04:34.882717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.798 [2024-11-19 10:04:34.882997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.798 [2024-11-19 10:04:34.883037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.798 [2024-11-19 10:04:34.883435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:20.799 
[2024-11-19 10:04:34.883697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.799 [2024-11-19 10:04:34.883727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.799 [2024-11-19 10:04:34.884082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.799 "name": "raid_bdev1", 00:10:20.799 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:20.799 "strip_size_kb": 0, 00:10:20.799 "state": "online", 00:10:20.799 "raid_level": "raid1", 00:10:20.799 "superblock": true, 00:10:20.799 "num_base_bdevs": 3, 00:10:20.799 "num_base_bdevs_discovered": 3, 00:10:20.799 "num_base_bdevs_operational": 3, 00:10:20.799 "base_bdevs_list": [ 00:10:20.799 { 00:10:20.799 "name": "pt1", 00:10:20.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.799 "is_configured": true, 00:10:20.799 "data_offset": 2048, 00:10:20.799 "data_size": 63488 00:10:20.799 }, 00:10:20.799 { 00:10:20.799 "name": "pt2", 00:10:20.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.799 "is_configured": true, 00:10:20.799 "data_offset": 2048, 00:10:20.799 "data_size": 63488 00:10:20.799 }, 00:10:20.799 { 00:10:20.799 "name": "pt3", 00:10:20.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.799 "is_configured": true, 00:10:20.799 "data_offset": 2048, 00:10:20.799 "data_size": 63488 00:10:20.799 } 00:10:20.799 ] 00:10:20.799 }' 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.799 10:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.366 10:04:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.366 [2024-11-19 10:04:35.405057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.366 "name": "raid_bdev1", 00:10:21.366 "aliases": [ 00:10:21.366 "e99289be-bab3-4de9-85f0-c6adf1a8138c" 00:10:21.366 ], 00:10:21.366 "product_name": "Raid Volume", 00:10:21.366 "block_size": 512, 00:10:21.366 "num_blocks": 63488, 00:10:21.366 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:21.366 "assigned_rate_limits": { 00:10:21.366 "rw_ios_per_sec": 0, 00:10:21.366 "rw_mbytes_per_sec": 0, 00:10:21.366 "r_mbytes_per_sec": 0, 00:10:21.366 "w_mbytes_per_sec": 0 00:10:21.366 }, 00:10:21.366 "claimed": false, 00:10:21.366 "zoned": false, 00:10:21.366 "supported_io_types": { 00:10:21.366 "read": true, 00:10:21.366 "write": true, 00:10:21.366 "unmap": false, 00:10:21.366 "flush": false, 00:10:21.366 "reset": true, 00:10:21.366 "nvme_admin": false, 00:10:21.366 "nvme_io": false, 00:10:21.366 "nvme_io_md": false, 00:10:21.366 "write_zeroes": true, 00:10:21.366 "zcopy": false, 00:10:21.366 "get_zone_info": false, 00:10:21.366 "zone_management": false, 00:10:21.366 "zone_append": false, 00:10:21.366 "compare": false, 00:10:21.366 
"compare_and_write": false, 00:10:21.366 "abort": false, 00:10:21.366 "seek_hole": false, 00:10:21.366 "seek_data": false, 00:10:21.366 "copy": false, 00:10:21.366 "nvme_iov_md": false 00:10:21.366 }, 00:10:21.366 "memory_domains": [ 00:10:21.366 { 00:10:21.366 "dma_device_id": "system", 00:10:21.366 "dma_device_type": 1 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.366 "dma_device_type": 2 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "dma_device_id": "system", 00:10:21.366 "dma_device_type": 1 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.366 "dma_device_type": 2 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "dma_device_id": "system", 00:10:21.366 "dma_device_type": 1 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.366 "dma_device_type": 2 00:10:21.366 } 00:10:21.366 ], 00:10:21.366 "driver_specific": { 00:10:21.366 "raid": { 00:10:21.366 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:21.366 "strip_size_kb": 0, 00:10:21.366 "state": "online", 00:10:21.366 "raid_level": "raid1", 00:10:21.366 "superblock": true, 00:10:21.366 "num_base_bdevs": 3, 00:10:21.366 "num_base_bdevs_discovered": 3, 00:10:21.366 "num_base_bdevs_operational": 3, 00:10:21.366 "base_bdevs_list": [ 00:10:21.366 { 00:10:21.366 "name": "pt1", 00:10:21.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.366 "is_configured": true, 00:10:21.366 "data_offset": 2048, 00:10:21.366 "data_size": 63488 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "name": "pt2", 00:10:21.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.366 "is_configured": true, 00:10:21.366 "data_offset": 2048, 00:10:21.366 "data_size": 63488 00:10:21.366 }, 00:10:21.366 { 00:10:21.366 "name": "pt3", 00:10:21.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.366 "is_configured": true, 00:10:21.366 "data_offset": 2048, 00:10:21.366 "data_size": 63488 00:10:21.366 } 
00:10:21.366 ] 00:10:21.366 } 00:10:21.366 } 00:10:21.366 }' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.366 pt2 00:10:21.366 pt3' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.366 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.367 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 [2024-11-19 10:04:35.685096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e99289be-bab3-4de9-85f0-c6adf1a8138c 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e99289be-bab3-4de9-85f0-c6adf1a8138c ']' 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 [2024-11-19 10:04:35.728750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.626 [2024-11-19 10:04:35.728808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.626 [2024-11-19 10:04:35.728950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.626 [2024-11-19 10:04:35.729065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.626 [2024-11-19 10:04:35.729093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.626 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.626 [2024-11-19 10:04:35.856885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.884 [2024-11-19 10:04:35.859739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.884 [2024-11-19 10:04:35.859849] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.884 [2024-11-19 10:04:35.859945] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.884 [2024-11-19 10:04:35.860042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.884 [2024-11-19 10:04:35.860077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:21.884 [2024-11-19 10:04:35.860110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.884 [2024-11-19 10:04:35.860126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:21.884 request: 00:10:21.884 { 00:10:21.884 "name": "raid_bdev1", 00:10:21.884 "raid_level": "raid1", 00:10:21.884 "base_bdevs": [ 00:10:21.884 "malloc1", 00:10:21.884 "malloc2", 00:10:21.884 "malloc3" 00:10:21.884 ], 00:10:21.884 "superblock": false, 00:10:21.884 "method": "bdev_raid_create", 00:10:21.884 "req_id": 1 00:10:21.884 } 00:10:21.884 Got JSON-RPC error response 00:10:21.884 response: 00:10:21.884 { 00:10:21.884 "code": -17, 00:10:21.884 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.884 } 00:10:21.884 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.885 [2024-11-19 10:04:35.916989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.885 [2024-11-19 10:04:35.917089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.885 [2024-11-19 10:04:35.917132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:21.885 [2024-11-19 10:04:35.917149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.885 [2024-11-19 10:04:35.920405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.885 [2024-11-19 10:04:35.920455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.885 [2024-11-19 10:04:35.920588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.885 [2024-11-19 10:04:35.920666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.885 pt1 00:10:21.885 
10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.885 "name": "raid_bdev1", 00:10:21.885 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:21.885 "strip_size_kb": 0, 00:10:21.885 
"state": "configuring", 00:10:21.885 "raid_level": "raid1", 00:10:21.885 "superblock": true, 00:10:21.885 "num_base_bdevs": 3, 00:10:21.885 "num_base_bdevs_discovered": 1, 00:10:21.885 "num_base_bdevs_operational": 3, 00:10:21.885 "base_bdevs_list": [ 00:10:21.885 { 00:10:21.885 "name": "pt1", 00:10:21.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.885 "is_configured": true, 00:10:21.885 "data_offset": 2048, 00:10:21.885 "data_size": 63488 00:10:21.885 }, 00:10:21.885 { 00:10:21.885 "name": null, 00:10:21.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.885 "is_configured": false, 00:10:21.885 "data_offset": 2048, 00:10:21.885 "data_size": 63488 00:10:21.885 }, 00:10:21.885 { 00:10:21.885 "name": null, 00:10:21.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.885 "is_configured": false, 00:10:21.885 "data_offset": 2048, 00:10:21.885 "data_size": 63488 00:10:21.885 } 00:10:21.885 ] 00:10:21.885 }' 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.885 10:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.451 [2024-11-19 10:04:36.445185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.451 [2024-11-19 10:04:36.445276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.451 [2024-11-19 10:04:36.445319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:22.451 
[2024-11-19 10:04:36.445336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.451 [2024-11-19 10:04:36.446009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.451 [2024-11-19 10:04:36.446050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.451 [2024-11-19 10:04:36.446180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.451 [2024-11-19 10:04:36.446216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.451 pt2 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.451 [2024-11-19 10:04:36.453142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.451 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.451 "name": "raid_bdev1", 00:10:22.452 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:22.452 "strip_size_kb": 0, 00:10:22.452 "state": "configuring", 00:10:22.452 "raid_level": "raid1", 00:10:22.452 "superblock": true, 00:10:22.452 "num_base_bdevs": 3, 00:10:22.452 "num_base_bdevs_discovered": 1, 00:10:22.452 "num_base_bdevs_operational": 3, 00:10:22.452 "base_bdevs_list": [ 00:10:22.452 { 00:10:22.452 "name": "pt1", 00:10:22.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.452 "is_configured": true, 00:10:22.452 "data_offset": 2048, 00:10:22.452 "data_size": 63488 00:10:22.452 }, 00:10:22.452 { 00:10:22.452 "name": null, 00:10:22.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.452 "is_configured": false, 00:10:22.452 "data_offset": 0, 00:10:22.452 "data_size": 63488 00:10:22.452 }, 00:10:22.452 { 00:10:22.452 "name": null, 00:10:22.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.452 "is_configured": false, 00:10:22.452 
"data_offset": 2048, 00:10:22.452 "data_size": 63488 00:10:22.452 } 00:10:22.452 ] 00:10:22.452 }' 00:10:22.452 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.452 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.710 [2024-11-19 10:04:36.913256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.710 [2024-11-19 10:04:36.913364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.710 [2024-11-19 10:04:36.913397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:22.710 [2024-11-19 10:04:36.913417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.710 [2024-11-19 10:04:36.914099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.710 [2024-11-19 10:04:36.914139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.710 [2024-11-19 10:04:36.914256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.710 [2024-11-19 10:04:36.914315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.710 pt2 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.710 10:04:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.710 [2024-11-19 10:04:36.921259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.710 [2024-11-19 10:04:36.921338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.710 [2024-11-19 10:04:36.921373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.710 [2024-11-19 10:04:36.921395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.710 [2024-11-19 10:04:36.922014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.710 [2024-11-19 10:04:36.922065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.710 [2024-11-19 10:04:36.922185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.710 [2024-11-19 10:04:36.922225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.710 [2024-11-19 10:04:36.922405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.710 [2024-11-19 10:04:36.922440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.710 [2024-11-19 10:04:36.922760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.710 [2024-11-19 10:04:36.923006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:22.710 [2024-11-19 10:04:36.923033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:22.710 [2024-11-19 10:04:36.923228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.710 pt3 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.710 10:04:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.710 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.968 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.968 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.968 "name": "raid_bdev1", 00:10:22.968 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:22.968 "strip_size_kb": 0, 00:10:22.968 "state": "online", 00:10:22.968 "raid_level": "raid1", 00:10:22.968 "superblock": true, 00:10:22.968 "num_base_bdevs": 3, 00:10:22.968 "num_base_bdevs_discovered": 3, 00:10:22.968 "num_base_bdevs_operational": 3, 00:10:22.968 "base_bdevs_list": [ 00:10:22.968 { 00:10:22.968 "name": "pt1", 00:10:22.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.968 "is_configured": true, 00:10:22.968 "data_offset": 2048, 00:10:22.968 "data_size": 63488 00:10:22.968 }, 00:10:22.968 { 00:10:22.968 "name": "pt2", 00:10:22.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.968 "is_configured": true, 00:10:22.968 "data_offset": 2048, 00:10:22.968 "data_size": 63488 00:10:22.968 }, 00:10:22.968 { 00:10:22.968 "name": "pt3", 00:10:22.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.968 "is_configured": true, 00:10:22.968 "data_offset": 2048, 00:10:22.968 "data_size": 63488 00:10:22.968 } 00:10:22.968 ] 00:10:22.968 }' 00:10:22.968 10:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.968 10:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.230 [2024-11-19 10:04:37.393823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.230 "name": "raid_bdev1", 00:10:23.230 "aliases": [ 00:10:23.230 "e99289be-bab3-4de9-85f0-c6adf1a8138c" 00:10:23.230 ], 00:10:23.230 "product_name": "Raid Volume", 00:10:23.230 "block_size": 512, 00:10:23.230 "num_blocks": 63488, 00:10:23.230 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:23.230 "assigned_rate_limits": { 00:10:23.230 "rw_ios_per_sec": 0, 00:10:23.230 "rw_mbytes_per_sec": 0, 00:10:23.230 "r_mbytes_per_sec": 0, 00:10:23.230 "w_mbytes_per_sec": 0 00:10:23.230 }, 00:10:23.230 "claimed": false, 00:10:23.230 "zoned": false, 00:10:23.230 "supported_io_types": { 00:10:23.230 "read": true, 00:10:23.230 "write": true, 00:10:23.230 "unmap": false, 00:10:23.230 "flush": false, 00:10:23.230 "reset": true, 00:10:23.230 "nvme_admin": false, 00:10:23.230 "nvme_io": false, 00:10:23.230 "nvme_io_md": false, 00:10:23.230 "write_zeroes": true, 00:10:23.230 "zcopy": false, 00:10:23.230 "get_zone_info": 
false, 00:10:23.230 "zone_management": false, 00:10:23.230 "zone_append": false, 00:10:23.230 "compare": false, 00:10:23.230 "compare_and_write": false, 00:10:23.230 "abort": false, 00:10:23.230 "seek_hole": false, 00:10:23.230 "seek_data": false, 00:10:23.230 "copy": false, 00:10:23.230 "nvme_iov_md": false 00:10:23.230 }, 00:10:23.230 "memory_domains": [ 00:10:23.230 { 00:10:23.230 "dma_device_id": "system", 00:10:23.230 "dma_device_type": 1 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.230 "dma_device_type": 2 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "dma_device_id": "system", 00:10:23.230 "dma_device_type": 1 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.230 "dma_device_type": 2 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "dma_device_id": "system", 00:10:23.230 "dma_device_type": 1 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.230 "dma_device_type": 2 00:10:23.230 } 00:10:23.230 ], 00:10:23.230 "driver_specific": { 00:10:23.230 "raid": { 00:10:23.230 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:23.230 "strip_size_kb": 0, 00:10:23.230 "state": "online", 00:10:23.230 "raid_level": "raid1", 00:10:23.230 "superblock": true, 00:10:23.230 "num_base_bdevs": 3, 00:10:23.230 "num_base_bdevs_discovered": 3, 00:10:23.230 "num_base_bdevs_operational": 3, 00:10:23.230 "base_bdevs_list": [ 00:10:23.230 { 00:10:23.230 "name": "pt1", 00:10:23.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.230 "is_configured": true, 00:10:23.230 "data_offset": 2048, 00:10:23.230 "data_size": 63488 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "name": "pt2", 00:10:23.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.230 "is_configured": true, 00:10:23.230 "data_offset": 2048, 00:10:23.230 "data_size": 63488 00:10:23.230 }, 00:10:23.230 { 00:10:23.230 "name": "pt3", 00:10:23.230 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:23.230 "is_configured": true, 00:10:23.230 "data_offset": 2048, 00:10:23.230 "data_size": 63488 00:10:23.230 } 00:10:23.230 ] 00:10:23.230 } 00:10:23.230 } 00:10:23.230 }' 00:10:23.230 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.500 pt2 00:10:23.500 pt3' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.500 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.758 [2024-11-19 10:04:37.769958] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e99289be-bab3-4de9-85f0-c6adf1a8138c '!=' e99289be-bab3-4de9-85f0-c6adf1a8138c ']' 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.758 [2024-11-19 10:04:37.821643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.758 10:04:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.758 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.758 "name": "raid_bdev1", 00:10:23.758 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:23.758 "strip_size_kb": 0, 00:10:23.758 "state": "online", 00:10:23.758 "raid_level": "raid1", 00:10:23.758 "superblock": true, 00:10:23.758 "num_base_bdevs": 3, 00:10:23.759 "num_base_bdevs_discovered": 2, 00:10:23.759 "num_base_bdevs_operational": 2, 00:10:23.759 "base_bdevs_list": [ 00:10:23.759 { 00:10:23.759 "name": null, 00:10:23.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.759 "is_configured": false, 00:10:23.759 "data_offset": 0, 00:10:23.759 "data_size": 63488 00:10:23.759 }, 00:10:23.759 { 00:10:23.759 "name": "pt2", 00:10:23.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.759 "is_configured": true, 00:10:23.759 "data_offset": 2048, 00:10:23.759 "data_size": 63488 00:10:23.759 }, 00:10:23.759 { 00:10:23.759 "name": "pt3", 00:10:23.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.759 "is_configured": true, 00:10:23.759 "data_offset": 2048, 00:10:23.759 "data_size": 63488 00:10:23.759 } 
00:10:23.759 ] 00:10:23.759 }' 00:10:23.759 10:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.759 10:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.324 [2024-11-19 10:04:38.393715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.324 [2024-11-19 10:04:38.393762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.324 [2024-11-19 10:04:38.393906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.324 [2024-11-19 10:04:38.394005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.324 [2024-11-19 10:04:38.394041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.324 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.325 10:04:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.325 [2024-11-19 10:04:38.473724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.325 [2024-11-19 10:04:38.473864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.325 [2024-11-19 10:04:38.473899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:24.325 [2024-11-19 10:04:38.473918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.325 [2024-11-19 10:04:38.477170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.325 [2024-11-19 10:04:38.477224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.325 [2024-11-19 10:04:38.477359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.325 [2024-11-19 10:04:38.477440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.325 pt2 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.325 10:04:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.325 "name": "raid_bdev1", 00:10:24.325 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:24.325 "strip_size_kb": 0, 00:10:24.325 "state": "configuring", 00:10:24.325 "raid_level": "raid1", 00:10:24.325 "superblock": true, 00:10:24.325 "num_base_bdevs": 3, 00:10:24.325 "num_base_bdevs_discovered": 1, 00:10:24.325 "num_base_bdevs_operational": 2, 00:10:24.325 "base_bdevs_list": [ 00:10:24.325 { 00:10:24.325 "name": null, 00:10:24.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.325 "is_configured": false, 00:10:24.325 "data_offset": 2048, 00:10:24.325 "data_size": 63488 00:10:24.325 }, 00:10:24.325 { 00:10:24.325 "name": "pt2", 00:10:24.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.325 "is_configured": true, 00:10:24.325 "data_offset": 2048, 00:10:24.325 "data_size": 63488 00:10:24.325 }, 00:10:24.325 { 00:10:24.325 "name": null, 00:10:24.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.325 "is_configured": false, 00:10:24.325 "data_offset": 2048, 00:10:24.325 "data_size": 63488 00:10:24.325 } 
00:10:24.325 ] 00:10:24.325 }' 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.325 10:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.893 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.893 [2024-11-19 10:04:39.029947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.893 [2024-11-19 10:04:39.030061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.893 [2024-11-19 10:04:39.030099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:24.893 [2024-11-19 10:04:39.030119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.893 [2024-11-19 10:04:39.030826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.893 [2024-11-19 10:04:39.030865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.893 [2024-11-19 10:04:39.031001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.893 [2024-11-19 10:04:39.031048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.893 [2024-11-19 10:04:39.031210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:24.894 [2024-11-19 10:04:39.031232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.894 [2024-11-19 10:04:39.031573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:24.894 [2024-11-19 10:04:39.031807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.894 [2024-11-19 10:04:39.031832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.894 [2024-11-19 10:04:39.032064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.894 pt3 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.894 
10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.894 "name": "raid_bdev1", 00:10:24.894 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:24.894 "strip_size_kb": 0, 00:10:24.894 "state": "online", 00:10:24.894 "raid_level": "raid1", 00:10:24.894 "superblock": true, 00:10:24.894 "num_base_bdevs": 3, 00:10:24.894 "num_base_bdevs_discovered": 2, 00:10:24.894 "num_base_bdevs_operational": 2, 00:10:24.894 "base_bdevs_list": [ 00:10:24.894 { 00:10:24.894 "name": null, 00:10:24.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.894 "is_configured": false, 00:10:24.894 "data_offset": 2048, 00:10:24.894 "data_size": 63488 00:10:24.894 }, 00:10:24.894 { 00:10:24.894 "name": "pt2", 00:10:24.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.894 "is_configured": true, 00:10:24.894 "data_offset": 2048, 00:10:24.894 "data_size": 63488 00:10:24.894 }, 00:10:24.894 { 00:10:24.894 "name": "pt3", 00:10:24.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.894 "is_configured": true, 00:10:24.894 "data_offset": 2048, 00:10:24.894 "data_size": 63488 00:10:24.894 } 00:10:24.894 ] 00:10:24.894 }' 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.894 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 [2024-11-19 10:04:39.582056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.461 [2024-11-19 10:04:39.582122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.461 [2024-11-19 10:04:39.582244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.461 [2024-11-19 10:04:39.582349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.461 [2024-11-19 10:04:39.582374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 [2024-11-19 10:04:39.650130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.461 [2024-11-19 10:04:39.650227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.461 [2024-11-19 10:04:39.650268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:25.461 [2024-11-19 10:04:39.650285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.461 [2024-11-19 10:04:39.653707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.461 [2024-11-19 10:04:39.653774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.461 [2024-11-19 10:04:39.653957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.461 [2024-11-19 10:04:39.654029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.461 [2024-11-19 10:04:39.654227] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:25.461 [2024-11-19 10:04:39.654262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.461 [2024-11-19 10:04:39.654308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:25.461 [2024-11-19 10:04:39.654442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.461 pt1 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.461 10:04:39 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.719 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.719 "name": "raid_bdev1", 00:10:25.719 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:25.719 "strip_size_kb": 0, 00:10:25.719 "state": "configuring", 00:10:25.719 "raid_level": "raid1", 00:10:25.719 "superblock": true, 00:10:25.719 "num_base_bdevs": 3, 00:10:25.719 "num_base_bdevs_discovered": 1, 00:10:25.719 "num_base_bdevs_operational": 2, 00:10:25.719 "base_bdevs_list": [ 00:10:25.719 { 00:10:25.719 "name": null, 00:10:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.719 "is_configured": false, 00:10:25.719 "data_offset": 2048, 00:10:25.719 "data_size": 63488 00:10:25.719 }, 00:10:25.719 { 00:10:25.719 "name": "pt2", 00:10:25.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.719 "is_configured": true, 00:10:25.719 "data_offset": 2048, 00:10:25.719 "data_size": 63488 00:10:25.719 }, 00:10:25.719 { 00:10:25.719 "name": null, 00:10:25.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.720 "is_configured": false, 00:10:25.720 "data_offset": 2048, 00:10:25.720 "data_size": 63488 00:10:25.720 } 00:10:25.720 ] 00:10:25.720 }' 00:10:25.720 10:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.720 10:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.978 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:25.978 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.978 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.978 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:25.978 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:26.236 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:26.236 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.236 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.236 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.236 [2024-11-19 10:04:40.222589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.236 [2024-11-19 10:04:40.222696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.236 [2024-11-19 10:04:40.222736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:26.236 [2024-11-19 10:04:40.222754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.236 [2024-11-19 10:04:40.223438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.236 [2024-11-19 10:04:40.223474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.236 [2024-11-19 10:04:40.223636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.236 [2024-11-19 10:04:40.223725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.236 [2024-11-19 10:04:40.223981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:26.236 [2024-11-19 10:04:40.224005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.236 [2024-11-19 10:04:40.224411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:26.236 [2024-11-19 10:04:40.224686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:26.236 [2024-11-19 10:04:40.224724] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:26.237 [2024-11-19 10:04:40.224984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.237 pt3 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.237 "name": "raid_bdev1", 00:10:26.237 "uuid": "e99289be-bab3-4de9-85f0-c6adf1a8138c", 00:10:26.237 "strip_size_kb": 0, 00:10:26.237 "state": "online", 00:10:26.237 "raid_level": "raid1", 00:10:26.237 "superblock": true, 00:10:26.237 "num_base_bdevs": 3, 00:10:26.237 "num_base_bdevs_discovered": 2, 00:10:26.237 "num_base_bdevs_operational": 2, 00:10:26.237 "base_bdevs_list": [ 00:10:26.237 { 00:10:26.237 "name": null, 00:10:26.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.237 "is_configured": false, 00:10:26.237 "data_offset": 2048, 00:10:26.237 "data_size": 63488 00:10:26.237 }, 00:10:26.237 { 00:10:26.237 "name": "pt2", 00:10:26.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.237 "is_configured": true, 00:10:26.237 "data_offset": 2048, 00:10:26.237 "data_size": 63488 00:10:26.237 }, 00:10:26.237 { 00:10:26.237 "name": "pt3", 00:10:26.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.237 "is_configured": true, 00:10:26.237 "data_offset": 2048, 00:10:26.237 "data_size": 63488 00:10:26.237 } 00:10:26.237 ] 00:10:26.237 }' 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.237 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.861 10:04:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.862 [2024-11-19 10:04:40.823129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e99289be-bab3-4de9-85f0-c6adf1a8138c '!=' e99289be-bab3-4de9-85f0-c6adf1a8138c ']' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68583 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68583 ']' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68583 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68583 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.862 killing process with pid 68583 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68583' 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68583 00:10:26.862 [2024-11-19 10:04:40.895046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.862 10:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68583 00:10:26.862 [2024-11-19 10:04:40.895199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.862 [2024-11-19 10:04:40.895308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.862 [2024-11-19 10:04:40.895331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:27.120 [2024-11-19 10:04:41.192817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.497 10:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.497 00:10:28.497 real 0m8.843s 00:10:28.497 user 0m14.323s 00:10:28.497 sys 0m1.278s 00:10:28.497 10:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.497 10:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.497 ************************************ 00:10:28.497 END TEST raid_superblock_test 00:10:28.497 ************************************ 00:10:28.497 10:04:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:28.497 10:04:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.497 10:04:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.497 10:04:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.497 ************************************ 00:10:28.497 START TEST raid_read_error_test 00:10:28.497 ************************************ 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:28.497 10:04:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.497 10:04:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ku1QlK7NnX 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69040 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69040 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69040 ']' 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.497 10:04:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.497 [2024-11-19 10:04:42.477901] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:28.497 [2024-11-19 10:04:42.478075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69040 ] 00:10:28.497 [2024-11-19 10:04:42.657047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.756 [2024-11-19 10:04:42.804971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.014 [2024-11-19 10:04:43.033497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.014 [2024-11-19 10:04:43.033577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.273 BaseBdev1_malloc 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.273 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 true 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 [2024-11-19 10:04:43.515009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.532 [2024-11-19 10:04:43.515083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.532 [2024-11-19 10:04:43.515114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.532 [2024-11-19 10:04:43.515134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.532 [2024-11-19 10:04:43.518188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.532 [2024-11-19 10:04:43.518238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.532 BaseBdev1 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 BaseBdev2_malloc 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 true 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 [2024-11-19 10:04:43.579625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.532 [2024-11-19 10:04:43.579699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.532 [2024-11-19 10:04:43.579732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.532 [2024-11-19 10:04:43.579751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.532 [2024-11-19 10:04:43.582804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.532 [2024-11-19 10:04:43.582849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.532 BaseBdev2 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 BaseBdev3_malloc 00:10:29.532 10:04:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 true 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.532 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.532 [2024-11-19 10:04:43.660516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.532 [2024-11-19 10:04:43.660618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.532 [2024-11-19 10:04:43.660658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.533 [2024-11-19 10:04:43.660677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.533 [2024-11-19 10:04:43.664175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.533 [2024-11-19 10:04:43.664249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:29.533 BaseBdev3 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.533 [2024-11-19 10:04:43.672814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.533 [2024-11-19 10:04:43.675705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.533 [2024-11-19 10:04:43.675856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.533 [2024-11-19 10:04:43.676337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.533 [2024-11-19 10:04:43.676370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.533 [2024-11-19 10:04:43.676853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:29.533 [2024-11-19 10:04:43.677199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.533 [2024-11-19 10:04:43.677233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.533 [2024-11-19 10:04:43.677590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.533 10:04:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.533 "name": "raid_bdev1", 00:10:29.533 "uuid": "437d02ac-79a1-40aa-9845-cdd54cec72f4", 00:10:29.533 "strip_size_kb": 0, 00:10:29.533 "state": "online", 00:10:29.533 "raid_level": "raid1", 00:10:29.533 "superblock": true, 00:10:29.533 "num_base_bdevs": 3, 00:10:29.533 "num_base_bdevs_discovered": 3, 00:10:29.533 "num_base_bdevs_operational": 3, 00:10:29.533 "base_bdevs_list": [ 00:10:29.533 { 00:10:29.533 "name": "BaseBdev1", 00:10:29.533 "uuid": "deed2d4c-2963-52cc-8c6c-dc66cea71036", 00:10:29.533 "is_configured": true, 00:10:29.533 "data_offset": 2048, 00:10:29.533 "data_size": 63488 00:10:29.533 }, 00:10:29.533 { 00:10:29.533 "name": "BaseBdev2", 00:10:29.533 "uuid": "c5c5f206-9707-5dfe-82ca-c92caeaeabc7", 00:10:29.533 "is_configured": true, 00:10:29.533 "data_offset": 2048, 00:10:29.533 "data_size": 63488 
00:10:29.533 }, 00:10:29.533 { 00:10:29.533 "name": "BaseBdev3", 00:10:29.533 "uuid": "3c95f700-7015-5314-9d2d-07aabf4babed", 00:10:29.533 "is_configured": true, 00:10:29.533 "data_offset": 2048, 00:10:29.533 "data_size": 63488 00:10:29.533 } 00:10:29.533 ] 00:10:29.533 }' 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.533 10:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.100 10:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.100 10:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.100 [2024-11-19 10:04:44.299239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.035 
10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.035 "name": "raid_bdev1", 00:10:31.035 "uuid": "437d02ac-79a1-40aa-9845-cdd54cec72f4", 00:10:31.035 "strip_size_kb": 0, 00:10:31.035 "state": "online", 00:10:31.035 "raid_level": "raid1", 00:10:31.035 "superblock": true, 00:10:31.035 "num_base_bdevs": 3, 00:10:31.035 "num_base_bdevs_discovered": 3, 00:10:31.035 "num_base_bdevs_operational": 3, 00:10:31.035 "base_bdevs_list": [ 00:10:31.035 { 00:10:31.035 "name": "BaseBdev1", 00:10:31.035 "uuid": "deed2d4c-2963-52cc-8c6c-dc66cea71036", 
00:10:31.035 "is_configured": true, 00:10:31.035 "data_offset": 2048, 00:10:31.035 "data_size": 63488 00:10:31.035 }, 00:10:31.035 { 00:10:31.035 "name": "BaseBdev2", 00:10:31.035 "uuid": "c5c5f206-9707-5dfe-82ca-c92caeaeabc7", 00:10:31.035 "is_configured": true, 00:10:31.035 "data_offset": 2048, 00:10:31.035 "data_size": 63488 00:10:31.035 }, 00:10:31.035 { 00:10:31.035 "name": "BaseBdev3", 00:10:31.035 "uuid": "3c95f700-7015-5314-9d2d-07aabf4babed", 00:10:31.035 "is_configured": true, 00:10:31.035 "data_offset": 2048, 00:10:31.035 "data_size": 63488 00:10:31.035 } 00:10:31.035 ] 00:10:31.035 }' 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.035 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.600 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.600 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.600 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.600 [2024-11-19 10:04:45.741707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.600 [2024-11-19 10:04:45.741758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.600 [2024-11-19 10:04:45.745327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.600 [2024-11-19 10:04:45.745398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.600 [2024-11-19 10:04:45.745547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.600 [2024-11-19 10:04:45.745569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:31.600 { 00:10:31.600 "results": [ 00:10:31.600 { 00:10:31.600 "job": "raid_bdev1", 
00:10:31.600 "core_mask": "0x1", 00:10:31.600 "workload": "randrw", 00:10:31.600 "percentage": 50, 00:10:31.600 "status": "finished", 00:10:31.601 "queue_depth": 1, 00:10:31.601 "io_size": 131072, 00:10:31.601 "runtime": 1.439626, 00:10:31.601 "iops": 7378.999823565287, 00:10:31.601 "mibps": 922.3749779456609, 00:10:31.601 "io_failed": 0, 00:10:31.601 "io_timeout": 0, 00:10:31.601 "avg_latency_us": 131.17880961549983, 00:10:31.601 "min_latency_us": 45.38181818181818, 00:10:31.601 "max_latency_us": 2383.1272727272726 00:10:31.601 } 00:10:31.601 ], 00:10:31.601 "core_count": 1 00:10:31.601 } 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69040 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69040 ']' 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69040 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69040 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.601 killing process with pid 69040 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69040' 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69040 00:10:31.601 10:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69040 00:10:31.601 [2024-11-19 10:04:45.783742] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.857 [2024-11-19 10:04:46.014621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ku1QlK7NnX 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:33.236 00:10:33.236 real 0m4.869s 00:10:33.236 user 0m5.930s 00:10:33.236 sys 0m0.639s 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.236 10:04:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.236 ************************************ 00:10:33.236 END TEST raid_read_error_test 00:10:33.236 ************************************ 00:10:33.236 10:04:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:33.236 10:04:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.236 10:04:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.236 10:04:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.236 ************************************ 00:10:33.236 START TEST raid_write_error_test 00:10:33.236 ************************************ 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.236 10:04:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hcgewGH4Rb 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69186 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69186 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69186 ']' 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.236 10:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.236 [2024-11-19 10:04:47.408749] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:10:33.236 [2024-11-19 10:04:47.408935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69186 ] 00:10:33.494 [2024-11-19 10:04:47.587282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.752 [2024-11-19 10:04:47.735557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.752 [2024-11-19 10:04:47.965899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.752 [2024-11-19 10:04:47.966018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.328 BaseBdev1_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.328 true 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.328 [2024-11-19 10:04:48.494470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.328 [2024-11-19 10:04:48.494549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.328 [2024-11-19 10:04:48.494584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.328 [2024-11-19 10:04:48.494603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.328 [2024-11-19 10:04:48.497685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.328 [2024-11-19 10:04:48.497740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.328 BaseBdev1 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.328 BaseBdev2_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.328 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.627 true 00:10:34.627 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.627 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.627 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.627 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.627 [2024-11-19 10:04:48.558889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.627 [2024-11-19 10:04:48.559117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.627 [2024-11-19 10:04:48.559158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.627 [2024-11-19 10:04:48.559178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.628 [2024-11-19 10:04:48.562312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.628 [2024-11-19 10:04:48.562485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.628 BaseBdev2 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.628 10:04:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 BaseBdev3_malloc 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 true 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 [2024-11-19 10:04:48.632480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.628 [2024-11-19 10:04:48.632702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.628 [2024-11-19 10:04:48.632746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:34.628 [2024-11-19 10:04:48.632767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.628 [2024-11-19 10:04:48.635925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.628 [2024-11-19 10:04:48.635996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:34.628 BaseBdev3 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 [2024-11-19 10:04:48.640765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.628 [2024-11-19 10:04:48.643455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.628 [2024-11-19 10:04:48.643709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.628 [2024-11-19 10:04:48.644083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.628 [2024-11-19 10:04:48.644105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.628 [2024-11-19 10:04:48.644476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:34.628 [2024-11-19 10:04:48.644734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.628 [2024-11-19 10:04:48.644756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:34.628 [2024-11-19 10:04:48.645072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.628 "name": "raid_bdev1", 00:10:34.628 "uuid": "93768064-e6ea-43f7-8cd8-29ad7e179809", 00:10:34.628 "strip_size_kb": 0, 00:10:34.628 "state": "online", 00:10:34.628 "raid_level": "raid1", 00:10:34.628 "superblock": true, 00:10:34.628 "num_base_bdevs": 3, 00:10:34.628 "num_base_bdevs_discovered": 3, 00:10:34.628 "num_base_bdevs_operational": 3, 00:10:34.628 "base_bdevs_list": [ 00:10:34.628 { 00:10:34.628 "name": "BaseBdev1", 00:10:34.628 
"uuid": "2fe7182a-04a5-5ace-a8f8-d24c68ff909a", 00:10:34.628 "is_configured": true, 00:10:34.628 "data_offset": 2048, 00:10:34.628 "data_size": 63488 00:10:34.628 }, 00:10:34.628 { 00:10:34.628 "name": "BaseBdev2", 00:10:34.628 "uuid": "e5eff38b-649f-584f-a69e-359d9e853060", 00:10:34.628 "is_configured": true, 00:10:34.628 "data_offset": 2048, 00:10:34.628 "data_size": 63488 00:10:34.628 }, 00:10:34.628 { 00:10:34.628 "name": "BaseBdev3", 00:10:34.628 "uuid": "806dcee6-f1bd-5c68-8c32-f420dff05add", 00:10:34.628 "is_configured": true, 00:10:34.628 "data_offset": 2048, 00:10:34.628 "data_size": 63488 00:10:34.628 } 00:10:34.628 ] 00:10:34.628 }' 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.628 10:04:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.193 10:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.193 10:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.193 [2024-11-19 10:04:49.274807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.128 [2024-11-19 10:04:50.171926] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:36.128 [2024-11-19 10:04:50.172012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.128 [2024-11-19 10:04:50.172302] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:36.128 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.129 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.129 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.129 "name": "raid_bdev1", 00:10:36.129 "uuid": "93768064-e6ea-43f7-8cd8-29ad7e179809", 00:10:36.129 "strip_size_kb": 0, 00:10:36.129 "state": "online", 00:10:36.129 "raid_level": "raid1", 00:10:36.129 "superblock": true, 00:10:36.129 "num_base_bdevs": 3, 00:10:36.129 "num_base_bdevs_discovered": 2, 00:10:36.129 "num_base_bdevs_operational": 2, 00:10:36.129 "base_bdevs_list": [ 00:10:36.129 { 00:10:36.129 "name": null, 00:10:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.129 "is_configured": false, 00:10:36.129 "data_offset": 0, 00:10:36.129 "data_size": 63488 00:10:36.129 }, 00:10:36.129 { 00:10:36.129 "name": "BaseBdev2", 00:10:36.129 "uuid": "e5eff38b-649f-584f-a69e-359d9e853060", 00:10:36.129 "is_configured": true, 00:10:36.129 "data_offset": 2048, 00:10:36.129 "data_size": 63488 00:10:36.129 }, 00:10:36.129 { 00:10:36.129 "name": "BaseBdev3", 00:10:36.129 "uuid": "806dcee6-f1bd-5c68-8c32-f420dff05add", 00:10:36.129 "is_configured": true, 00:10:36.129 "data_offset": 2048, 00:10:36.129 "data_size": 63488 00:10:36.129 } 00:10:36.129 ] 00:10:36.129 }' 00:10:36.129 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.129 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.697 [2024-11-19 10:04:50.743401] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.697 [2024-11-19 10:04:50.743664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.697 [2024-11-19 10:04:50.747261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.697 [2024-11-19 10:04:50.747536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.697 [2024-11-19 10:04:50.747807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.697 [2024-11-19 10:04:50.747999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:36.697 { 00:10:36.697 "results": [ 00:10:36.697 { 00:10:36.697 "job": "raid_bdev1", 00:10:36.697 "core_mask": "0x1", 00:10:36.697 "workload": "randrw", 00:10:36.697 "percentage": 50, 00:10:36.697 "status": "finished", 00:10:36.697 "queue_depth": 1, 00:10:36.697 "io_size": 131072, 00:10:36.697 "runtime": 1.466282, 00:10:36.697 "iops": 8353.099881196114, 00:10:36.697 "mibps": 1044.1374851495143, 00:10:36.697 "io_failed": 0, 00:10:36.697 "io_timeout": 0, 00:10:36.697 "avg_latency_us": 115.33888011400748, 00:10:36.697 "min_latency_us": 45.14909090909091, 00:10:36.697 "max_latency_us": 2100.130909090909 00:10:36.697 } 00:10:36.697 ], 00:10:36.697 "core_count": 1 00:10:36.697 } 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69186 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69186 ']' 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69186 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:36.697 10:04:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69186 00:10:36.697 killing process with pid 69186 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69186' 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69186 00:10:36.697 10:04:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69186 00:10:36.697 [2024-11-19 10:04:50.794336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.956 [2024-11-19 10:04:51.024861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.332 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.332 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hcgewGH4Rb 00:10:38.332 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.332 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:38.333 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:38.333 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.333 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.333 10:04:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:38.333 00:10:38.333 real 0m4.943s 00:10:38.333 user 0m6.087s 00:10:38.333 sys 0m0.631s 00:10:38.333 10:04:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.333 ************************************ 00:10:38.333 END TEST raid_write_error_test 00:10:38.333 ************************************ 00:10:38.333 10:04:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.333 10:04:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:38.333 10:04:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:38.333 10:04:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:38.333 10:04:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.333 10:04:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.333 10:04:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.333 ************************************ 00:10:38.333 START TEST raid_state_function_test 00:10:38.333 ************************************ 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.333 
10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69335 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69335' 00:10:38.333 Process raid pid: 69335 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69335 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69335 ']' 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.333 10:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.333 [2024-11-19 10:04:52.391890] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:38.333 [2024-11-19 10:04:52.392338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.592 [2024-11-19 10:04:52.574600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.592 [2024-11-19 10:04:52.728239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.851 [2024-11-19 10:04:52.962947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.851 [2024-11-19 10:04:52.963304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.419 [2024-11-19 10:04:53.432622] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.419 [2024-11-19 10:04:53.432850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.419 [2024-11-19 10:04:53.432879] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.419 [2024-11-19 10:04:53.432899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.419 [2024-11-19 10:04:53.432910] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:39.419 [2024-11-19 10:04:53.432924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.419 [2024-11-19 10:04:53.432934] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.419 [2024-11-19 10:04:53.432950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.419 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.419 "name": "Existed_Raid", 00:10:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.419 "strip_size_kb": 64, 00:10:39.419 "state": "configuring", 00:10:39.419 "raid_level": "raid0", 00:10:39.419 "superblock": false, 00:10:39.419 "num_base_bdevs": 4, 00:10:39.419 "num_base_bdevs_discovered": 0, 00:10:39.419 "num_base_bdevs_operational": 4, 00:10:39.419 "base_bdevs_list": [ 00:10:39.419 { 00:10:39.419 "name": "BaseBdev1", 00:10:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.419 "is_configured": false, 00:10:39.419 "data_offset": 0, 00:10:39.419 "data_size": 0 00:10:39.419 }, 00:10:39.419 { 00:10:39.419 "name": "BaseBdev2", 00:10:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.419 "is_configured": false, 00:10:39.420 "data_offset": 0, 00:10:39.420 "data_size": 0 00:10:39.420 }, 00:10:39.420 { 00:10:39.420 "name": "BaseBdev3", 00:10:39.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.420 "is_configured": false, 00:10:39.420 "data_offset": 0, 00:10:39.420 "data_size": 0 00:10:39.420 }, 00:10:39.420 { 00:10:39.420 "name": "BaseBdev4", 00:10:39.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.420 "is_configured": false, 00:10:39.420 "data_offset": 0, 00:10:39.420 "data_size": 0 00:10:39.420 } 00:10:39.420 ] 00:10:39.420 }' 00:10:39.420 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.420 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 [2024-11-19 10:04:53.956717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.987 [2024-11-19 10:04:53.956773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 [2024-11-19 10:04:53.964691] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.987 [2024-11-19 10:04:53.964896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.987 [2024-11-19 10:04:53.964923] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.987 [2024-11-19 10:04:53.964942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.987 [2024-11-19 10:04:53.964952] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.987 [2024-11-19 10:04:53.964967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.987 [2024-11-19 10:04:53.964976] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.987 [2024-11-19 10:04:53.964991] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 10:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 [2024-11-19 10:04:54.013503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.987 BaseBdev1 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 [ 00:10:39.987 { 00:10:39.987 "name": "BaseBdev1", 00:10:39.987 "aliases": [ 00:10:39.987 "304933ee-428b-4da8-bc67-cda6bd625235" 00:10:39.987 ], 00:10:39.987 "product_name": "Malloc disk", 00:10:39.987 "block_size": 512, 00:10:39.987 "num_blocks": 65536, 00:10:39.987 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:39.987 "assigned_rate_limits": { 00:10:39.987 "rw_ios_per_sec": 0, 00:10:39.987 "rw_mbytes_per_sec": 0, 00:10:39.987 "r_mbytes_per_sec": 0, 00:10:39.987 "w_mbytes_per_sec": 0 00:10:39.987 }, 00:10:39.987 "claimed": true, 00:10:39.987 "claim_type": "exclusive_write", 00:10:39.987 "zoned": false, 00:10:39.987 "supported_io_types": { 00:10:39.987 "read": true, 00:10:39.987 "write": true, 00:10:39.987 "unmap": true, 00:10:39.987 "flush": true, 00:10:39.987 "reset": true, 00:10:39.987 "nvme_admin": false, 00:10:39.987 "nvme_io": false, 00:10:39.987 "nvme_io_md": false, 00:10:39.987 "write_zeroes": true, 00:10:39.987 "zcopy": true, 00:10:39.987 "get_zone_info": false, 00:10:39.987 "zone_management": false, 00:10:39.987 "zone_append": false, 00:10:39.987 "compare": false, 00:10:39.987 "compare_and_write": false, 00:10:39.987 "abort": true, 00:10:39.987 "seek_hole": false, 00:10:39.987 "seek_data": false, 00:10:39.987 "copy": true, 00:10:39.987 "nvme_iov_md": false 00:10:39.987 }, 00:10:39.987 "memory_domains": [ 00:10:39.987 { 00:10:39.987 "dma_device_id": "system", 00:10:39.988 "dma_device_type": 1 00:10:39.988 }, 00:10:39.988 { 00:10:39.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.988 "dma_device_type": 2 00:10:39.988 } 00:10:39.988 ], 00:10:39.988 "driver_specific": {} 00:10:39.988 } 00:10:39.988 ] 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.988 "name": "Existed_Raid", 
00:10:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.988 "strip_size_kb": 64, 00:10:39.988 "state": "configuring", 00:10:39.988 "raid_level": "raid0", 00:10:39.988 "superblock": false, 00:10:39.988 "num_base_bdevs": 4, 00:10:39.988 "num_base_bdevs_discovered": 1, 00:10:39.988 "num_base_bdevs_operational": 4, 00:10:39.988 "base_bdevs_list": [ 00:10:39.988 { 00:10:39.988 "name": "BaseBdev1", 00:10:39.988 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:39.988 "is_configured": true, 00:10:39.988 "data_offset": 0, 00:10:39.988 "data_size": 65536 00:10:39.988 }, 00:10:39.988 { 00:10:39.988 "name": "BaseBdev2", 00:10:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.988 "is_configured": false, 00:10:39.988 "data_offset": 0, 00:10:39.988 "data_size": 0 00:10:39.988 }, 00:10:39.988 { 00:10:39.988 "name": "BaseBdev3", 00:10:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.988 "is_configured": false, 00:10:39.988 "data_offset": 0, 00:10:39.988 "data_size": 0 00:10:39.988 }, 00:10:39.988 { 00:10:39.988 "name": "BaseBdev4", 00:10:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.988 "is_configured": false, 00:10:39.988 "data_offset": 0, 00:10:39.988 "data_size": 0 00:10:39.988 } 00:10:39.988 ] 00:10:39.988 }' 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.988 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.555 [2024-11-19 10:04:54.533708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.555 [2024-11-19 10:04:54.533800] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.555 [2024-11-19 10:04:54.541756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.555 [2024-11-19 10:04:54.544423] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.555 [2024-11-19 10:04:54.544616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.555 [2024-11-19 10:04:54.544644] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.555 [2024-11-19 10:04:54.544665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.555 [2024-11-19 10:04:54.544676] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.555 [2024-11-19 10:04:54.544690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.555 "name": "Existed_Raid", 00:10:40.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.555 "strip_size_kb": 64, 00:10:40.555 "state": "configuring", 00:10:40.555 "raid_level": "raid0", 00:10:40.555 "superblock": false, 00:10:40.555 "num_base_bdevs": 4, 00:10:40.555 
"num_base_bdevs_discovered": 1, 00:10:40.555 "num_base_bdevs_operational": 4, 00:10:40.555 "base_bdevs_list": [ 00:10:40.555 { 00:10:40.555 "name": "BaseBdev1", 00:10:40.555 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:40.555 "is_configured": true, 00:10:40.555 "data_offset": 0, 00:10:40.555 "data_size": 65536 00:10:40.555 }, 00:10:40.555 { 00:10:40.555 "name": "BaseBdev2", 00:10:40.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.555 "is_configured": false, 00:10:40.555 "data_offset": 0, 00:10:40.555 "data_size": 0 00:10:40.555 }, 00:10:40.555 { 00:10:40.555 "name": "BaseBdev3", 00:10:40.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.555 "is_configured": false, 00:10:40.555 "data_offset": 0, 00:10:40.555 "data_size": 0 00:10:40.555 }, 00:10:40.555 { 00:10:40.555 "name": "BaseBdev4", 00:10:40.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.555 "is_configured": false, 00:10:40.555 "data_offset": 0, 00:10:40.555 "data_size": 0 00:10:40.555 } 00:10:40.555 ] 00:10:40.555 }' 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.555 10:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 [2024-11-19 10:04:55.152680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.123 BaseBdev2 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.123 10:04:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.123 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 [ 00:10:41.123 { 00:10:41.123 "name": "BaseBdev2", 00:10:41.123 "aliases": [ 00:10:41.124 "74ad2da7-464b-4f05-9cfd-c587b31124ff" 00:10:41.124 ], 00:10:41.124 "product_name": "Malloc disk", 00:10:41.124 "block_size": 512, 00:10:41.124 "num_blocks": 65536, 00:10:41.124 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:41.124 "assigned_rate_limits": { 00:10:41.124 "rw_ios_per_sec": 0, 00:10:41.124 "rw_mbytes_per_sec": 0, 00:10:41.124 "r_mbytes_per_sec": 0, 00:10:41.124 "w_mbytes_per_sec": 0 00:10:41.124 }, 00:10:41.124 "claimed": true, 00:10:41.124 "claim_type": "exclusive_write", 00:10:41.124 "zoned": false, 00:10:41.124 "supported_io_types": { 
00:10:41.124 "read": true, 00:10:41.124 "write": true, 00:10:41.124 "unmap": true, 00:10:41.124 "flush": true, 00:10:41.124 "reset": true, 00:10:41.124 "nvme_admin": false, 00:10:41.124 "nvme_io": false, 00:10:41.124 "nvme_io_md": false, 00:10:41.124 "write_zeroes": true, 00:10:41.124 "zcopy": true, 00:10:41.124 "get_zone_info": false, 00:10:41.124 "zone_management": false, 00:10:41.124 "zone_append": false, 00:10:41.124 "compare": false, 00:10:41.124 "compare_and_write": false, 00:10:41.124 "abort": true, 00:10:41.124 "seek_hole": false, 00:10:41.124 "seek_data": false, 00:10:41.124 "copy": true, 00:10:41.124 "nvme_iov_md": false 00:10:41.124 }, 00:10:41.124 "memory_domains": [ 00:10:41.124 { 00:10:41.124 "dma_device_id": "system", 00:10:41.124 "dma_device_type": 1 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.124 "dma_device_type": 2 00:10:41.124 } 00:10:41.124 ], 00:10:41.124 "driver_specific": {} 00:10:41.124 } 00:10:41.124 ] 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.124 "name": "Existed_Raid", 00:10:41.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.124 "strip_size_kb": 64, 00:10:41.124 "state": "configuring", 00:10:41.124 "raid_level": "raid0", 00:10:41.124 "superblock": false, 00:10:41.124 "num_base_bdevs": 4, 00:10:41.124 "num_base_bdevs_discovered": 2, 00:10:41.124 "num_base_bdevs_operational": 4, 00:10:41.124 "base_bdevs_list": [ 00:10:41.124 { 00:10:41.124 "name": "BaseBdev1", 00:10:41.124 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:41.124 "is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev2", 00:10:41.124 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:41.124 
"is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev3", 00:10:41.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.124 "is_configured": false, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 0 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev4", 00:10:41.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.124 "is_configured": false, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 0 00:10:41.124 } 00:10:41.124 ] 00:10:41.124 }' 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.124 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.736 [2024-11-19 10:04:55.805590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.736 BaseBdev3 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.736 [ 00:10:41.736 { 00:10:41.736 "name": "BaseBdev3", 00:10:41.736 "aliases": [ 00:10:41.736 "b524eb06-4439-4238-9955-00750a9b42ff" 00:10:41.736 ], 00:10:41.736 "product_name": "Malloc disk", 00:10:41.736 "block_size": 512, 00:10:41.736 "num_blocks": 65536, 00:10:41.736 "uuid": "b524eb06-4439-4238-9955-00750a9b42ff", 00:10:41.736 "assigned_rate_limits": { 00:10:41.736 "rw_ios_per_sec": 0, 00:10:41.736 "rw_mbytes_per_sec": 0, 00:10:41.736 "r_mbytes_per_sec": 0, 00:10:41.736 "w_mbytes_per_sec": 0 00:10:41.736 }, 00:10:41.736 "claimed": true, 00:10:41.736 "claim_type": "exclusive_write", 00:10:41.736 "zoned": false, 00:10:41.736 "supported_io_types": { 00:10:41.736 "read": true, 00:10:41.736 "write": true, 00:10:41.736 "unmap": true, 00:10:41.736 "flush": true, 00:10:41.736 "reset": true, 00:10:41.736 "nvme_admin": false, 00:10:41.736 "nvme_io": false, 00:10:41.736 "nvme_io_md": false, 00:10:41.736 "write_zeroes": true, 00:10:41.736 "zcopy": true, 00:10:41.736 "get_zone_info": false, 00:10:41.736 "zone_management": false, 00:10:41.736 "zone_append": false, 00:10:41.736 "compare": false, 00:10:41.736 "compare_and_write": false, 
00:10:41.736 "abort": true, 00:10:41.736 "seek_hole": false, 00:10:41.736 "seek_data": false, 00:10:41.736 "copy": true, 00:10:41.736 "nvme_iov_md": false 00:10:41.736 }, 00:10:41.736 "memory_domains": [ 00:10:41.736 { 00:10:41.736 "dma_device_id": "system", 00:10:41.736 "dma_device_type": 1 00:10:41.736 }, 00:10:41.736 { 00:10:41.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.736 "dma_device_type": 2 00:10:41.736 } 00:10:41.736 ], 00:10:41.736 "driver_specific": {} 00:10:41.736 } 00:10:41.736 ] 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.736 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.737 "name": "Existed_Raid", 00:10:41.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.737 "strip_size_kb": 64, 00:10:41.737 "state": "configuring", 00:10:41.737 "raid_level": "raid0", 00:10:41.737 "superblock": false, 00:10:41.737 "num_base_bdevs": 4, 00:10:41.737 "num_base_bdevs_discovered": 3, 00:10:41.737 "num_base_bdevs_operational": 4, 00:10:41.737 "base_bdevs_list": [ 00:10:41.737 { 00:10:41.737 "name": "BaseBdev1", 00:10:41.737 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:41.737 "is_configured": true, 00:10:41.737 "data_offset": 0, 00:10:41.737 "data_size": 65536 00:10:41.737 }, 00:10:41.737 { 00:10:41.737 "name": "BaseBdev2", 00:10:41.737 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:41.737 "is_configured": true, 00:10:41.737 "data_offset": 0, 00:10:41.737 "data_size": 65536 00:10:41.737 }, 00:10:41.737 { 00:10:41.737 "name": "BaseBdev3", 00:10:41.737 "uuid": "b524eb06-4439-4238-9955-00750a9b42ff", 00:10:41.737 "is_configured": true, 00:10:41.737 "data_offset": 0, 00:10:41.737 "data_size": 65536 00:10:41.737 }, 00:10:41.737 { 00:10:41.737 "name": "BaseBdev4", 00:10:41.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.737 "is_configured": false, 
00:10:41.737 "data_offset": 0, 00:10:41.737 "data_size": 0 00:10:41.737 } 00:10:41.737 ] 00:10:41.737 }' 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.737 10:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.304 [2024-11-19 10:04:56.396832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.304 [2024-11-19 10:04:56.396920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.304 [2024-11-19 10:04:56.396936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:42.304 [2024-11-19 10:04:56.397312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:42.304 [2024-11-19 10:04:56.397557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.304 [2024-11-19 10:04:56.397589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:42.304 [2024-11-19 10:04:56.397983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.304 BaseBdev4 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.304 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.304 [ 00:10:42.304 { 00:10:42.304 "name": "BaseBdev4", 00:10:42.304 "aliases": [ 00:10:42.304 "b8e8d54b-ae67-4050-b93c-fefb74a33bb3" 00:10:42.304 ], 00:10:42.304 "product_name": "Malloc disk", 00:10:42.304 "block_size": 512, 00:10:42.304 "num_blocks": 65536, 00:10:42.304 "uuid": "b8e8d54b-ae67-4050-b93c-fefb74a33bb3", 00:10:42.304 "assigned_rate_limits": { 00:10:42.304 "rw_ios_per_sec": 0, 00:10:42.304 "rw_mbytes_per_sec": 0, 00:10:42.304 "r_mbytes_per_sec": 0, 00:10:42.305 "w_mbytes_per_sec": 0 00:10:42.305 }, 00:10:42.305 "claimed": true, 00:10:42.305 "claim_type": "exclusive_write", 00:10:42.305 "zoned": false, 00:10:42.305 "supported_io_types": { 00:10:42.305 "read": true, 00:10:42.305 "write": true, 00:10:42.305 "unmap": true, 00:10:42.305 "flush": true, 00:10:42.305 "reset": true, 00:10:42.305 
"nvme_admin": false, 00:10:42.305 "nvme_io": false, 00:10:42.305 "nvme_io_md": false, 00:10:42.305 "write_zeroes": true, 00:10:42.305 "zcopy": true, 00:10:42.305 "get_zone_info": false, 00:10:42.305 "zone_management": false, 00:10:42.305 "zone_append": false, 00:10:42.305 "compare": false, 00:10:42.305 "compare_and_write": false, 00:10:42.305 "abort": true, 00:10:42.305 "seek_hole": false, 00:10:42.305 "seek_data": false, 00:10:42.305 "copy": true, 00:10:42.305 "nvme_iov_md": false 00:10:42.305 }, 00:10:42.305 "memory_domains": [ 00:10:42.305 { 00:10:42.305 "dma_device_id": "system", 00:10:42.305 "dma_device_type": 1 00:10:42.305 }, 00:10:42.305 { 00:10:42.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.305 "dma_device_type": 2 00:10:42.305 } 00:10:42.305 ], 00:10:42.305 "driver_specific": {} 00:10:42.305 } 00:10:42.305 ] 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.305 10:04:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.305 "name": "Existed_Raid", 00:10:42.305 "uuid": "3c474625-e824-4773-9455-662699854563", 00:10:42.305 "strip_size_kb": 64, 00:10:42.305 "state": "online", 00:10:42.305 "raid_level": "raid0", 00:10:42.305 "superblock": false, 00:10:42.305 "num_base_bdevs": 4, 00:10:42.305 "num_base_bdevs_discovered": 4, 00:10:42.305 "num_base_bdevs_operational": 4, 00:10:42.305 "base_bdevs_list": [ 00:10:42.305 { 00:10:42.305 "name": "BaseBdev1", 00:10:42.305 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:42.305 "is_configured": true, 00:10:42.305 "data_offset": 0, 00:10:42.305 "data_size": 65536 00:10:42.305 }, 00:10:42.305 { 00:10:42.305 "name": "BaseBdev2", 00:10:42.305 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:42.305 "is_configured": true, 00:10:42.305 "data_offset": 0, 00:10:42.305 "data_size": 65536 00:10:42.305 }, 00:10:42.305 { 00:10:42.305 "name": "BaseBdev3", 00:10:42.305 "uuid": 
"b524eb06-4439-4238-9955-00750a9b42ff", 00:10:42.305 "is_configured": true, 00:10:42.305 "data_offset": 0, 00:10:42.305 "data_size": 65536 00:10:42.305 }, 00:10:42.305 { 00:10:42.305 "name": "BaseBdev4", 00:10:42.305 "uuid": "b8e8d54b-ae67-4050-b93c-fefb74a33bb3", 00:10:42.305 "is_configured": true, 00:10:42.305 "data_offset": 0, 00:10:42.305 "data_size": 65536 00:10:42.305 } 00:10:42.305 ] 00:10:42.305 }' 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.305 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.872 [2024-11-19 10:04:56.889646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.872 10:04:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.872 "name": "Existed_Raid", 00:10:42.872 "aliases": [ 00:10:42.872 "3c474625-e824-4773-9455-662699854563" 00:10:42.872 ], 00:10:42.872 "product_name": "Raid Volume", 00:10:42.872 "block_size": 512, 00:10:42.872 "num_blocks": 262144, 00:10:42.872 "uuid": "3c474625-e824-4773-9455-662699854563", 00:10:42.872 "assigned_rate_limits": { 00:10:42.872 "rw_ios_per_sec": 0, 00:10:42.872 "rw_mbytes_per_sec": 0, 00:10:42.872 "r_mbytes_per_sec": 0, 00:10:42.872 "w_mbytes_per_sec": 0 00:10:42.872 }, 00:10:42.872 "claimed": false, 00:10:42.872 "zoned": false, 00:10:42.872 "supported_io_types": { 00:10:42.872 "read": true, 00:10:42.872 "write": true, 00:10:42.872 "unmap": true, 00:10:42.872 "flush": true, 00:10:42.872 "reset": true, 00:10:42.872 "nvme_admin": false, 00:10:42.872 "nvme_io": false, 00:10:42.872 "nvme_io_md": false, 00:10:42.872 "write_zeroes": true, 00:10:42.872 "zcopy": false, 00:10:42.872 "get_zone_info": false, 00:10:42.872 "zone_management": false, 00:10:42.872 "zone_append": false, 00:10:42.872 "compare": false, 00:10:42.872 "compare_and_write": false, 00:10:42.872 "abort": false, 00:10:42.872 "seek_hole": false, 00:10:42.872 "seek_data": false, 00:10:42.872 "copy": false, 00:10:42.872 "nvme_iov_md": false 00:10:42.872 }, 00:10:42.872 "memory_domains": [ 00:10:42.872 { 00:10:42.872 "dma_device_id": "system", 00:10:42.872 "dma_device_type": 1 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.872 "dma_device_type": 2 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "system", 00:10:42.872 "dma_device_type": 1 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.872 "dma_device_type": 2 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "system", 00:10:42.872 "dma_device_type": 1 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:42.872 "dma_device_type": 2 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "system", 00:10:42.872 "dma_device_type": 1 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.872 "dma_device_type": 2 00:10:42.872 } 00:10:42.872 ], 00:10:42.872 "driver_specific": { 00:10:42.872 "raid": { 00:10:42.872 "uuid": "3c474625-e824-4773-9455-662699854563", 00:10:42.872 "strip_size_kb": 64, 00:10:42.872 "state": "online", 00:10:42.872 "raid_level": "raid0", 00:10:42.872 "superblock": false, 00:10:42.872 "num_base_bdevs": 4, 00:10:42.872 "num_base_bdevs_discovered": 4, 00:10:42.872 "num_base_bdevs_operational": 4, 00:10:42.872 "base_bdevs_list": [ 00:10:42.872 { 00:10:42.872 "name": "BaseBdev1", 00:10:42.872 "uuid": "304933ee-428b-4da8-bc67-cda6bd625235", 00:10:42.872 "is_configured": true, 00:10:42.872 "data_offset": 0, 00:10:42.872 "data_size": 65536 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "name": "BaseBdev2", 00:10:42.872 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:42.872 "is_configured": true, 00:10:42.872 "data_offset": 0, 00:10:42.872 "data_size": 65536 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "name": "BaseBdev3", 00:10:42.872 "uuid": "b524eb06-4439-4238-9955-00750a9b42ff", 00:10:42.872 "is_configured": true, 00:10:42.872 "data_offset": 0, 00:10:42.872 "data_size": 65536 00:10:42.872 }, 00:10:42.872 { 00:10:42.872 "name": "BaseBdev4", 00:10:42.872 "uuid": "b8e8d54b-ae67-4050-b93c-fefb74a33bb3", 00:10:42.872 "is_configured": true, 00:10:42.872 "data_offset": 0, 00:10:42.872 "data_size": 65536 00:10:42.872 } 00:10:42.872 ] 00:10:42.872 } 00:10:42.872 } 00:10:42.872 }' 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:42.872 BaseBdev2 00:10:42.872 BaseBdev3 
00:10:42.872 BaseBdev4' 00:10:42.872 10:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.872 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.873 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.873 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.131 10:04:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.131 10:04:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.131 [2024-11-19 10:04:57.209420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.131 [2024-11-19 10:04:57.209473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.131 [2024-11-19 10:04:57.209554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.131 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.390 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.390 "name": "Existed_Raid", 00:10:43.390 "uuid": "3c474625-e824-4773-9455-662699854563", 00:10:43.390 "strip_size_kb": 64, 00:10:43.390 "state": "offline", 00:10:43.390 "raid_level": "raid0", 00:10:43.390 "superblock": false, 00:10:43.390 "num_base_bdevs": 4, 00:10:43.390 "num_base_bdevs_discovered": 3, 00:10:43.390 "num_base_bdevs_operational": 3, 00:10:43.390 "base_bdevs_list": [ 00:10:43.390 { 00:10:43.390 "name": null, 00:10:43.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.390 "is_configured": false, 00:10:43.390 "data_offset": 0, 00:10:43.390 "data_size": 65536 00:10:43.390 }, 00:10:43.390 { 00:10:43.390 "name": "BaseBdev2", 00:10:43.390 "uuid": "74ad2da7-464b-4f05-9cfd-c587b31124ff", 00:10:43.390 "is_configured": 
true, 00:10:43.390 "data_offset": 0, 00:10:43.390 "data_size": 65536 00:10:43.390 }, 00:10:43.390 { 00:10:43.390 "name": "BaseBdev3", 00:10:43.390 "uuid": "b524eb06-4439-4238-9955-00750a9b42ff", 00:10:43.390 "is_configured": true, 00:10:43.390 "data_offset": 0, 00:10:43.390 "data_size": 65536 00:10:43.390 }, 00:10:43.390 { 00:10:43.390 "name": "BaseBdev4", 00:10:43.390 "uuid": "b8e8d54b-ae67-4050-b93c-fefb74a33bb3", 00:10:43.390 "is_configured": true, 00:10:43.390 "data_offset": 0, 00:10:43.390 "data_size": 65536 00:10:43.390 } 00:10:43.390 ] 00:10:43.390 }' 00:10:43.390 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.390 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.648 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.906 [2024-11-19 10:04:57.884080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.906 10:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.906 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.906 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.906 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:43.906 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.906 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.906 [2024-11-19 10:04:58.053808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.165 10:04:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.165 [2024-11-19 10:04:58.215984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:44.165 [2024-11-19 10:04:58.216101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.165 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.424 BaseBdev2 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.424 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.424 [ 00:10:44.424 { 00:10:44.424 "name": "BaseBdev2", 00:10:44.424 "aliases": [ 00:10:44.424 "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc" 00:10:44.424 ], 00:10:44.424 "product_name": "Malloc disk", 00:10:44.424 "block_size": 512, 00:10:44.424 "num_blocks": 65536, 00:10:44.424 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:44.424 "assigned_rate_limits": { 00:10:44.424 "rw_ios_per_sec": 0, 00:10:44.424 "rw_mbytes_per_sec": 0, 00:10:44.424 "r_mbytes_per_sec": 0, 00:10:44.424 "w_mbytes_per_sec": 0 00:10:44.424 }, 00:10:44.424 "claimed": false, 00:10:44.424 "zoned": false, 00:10:44.424 "supported_io_types": { 00:10:44.424 "read": true, 00:10:44.424 "write": true, 00:10:44.424 "unmap": true, 00:10:44.424 "flush": true, 00:10:44.424 "reset": true, 00:10:44.424 "nvme_admin": false, 00:10:44.424 "nvme_io": false, 00:10:44.424 "nvme_io_md": false, 00:10:44.424 "write_zeroes": true, 00:10:44.424 "zcopy": true, 00:10:44.424 "get_zone_info": false, 00:10:44.424 "zone_management": false, 00:10:44.424 "zone_append": false, 00:10:44.424 "compare": false, 00:10:44.424 "compare_and_write": false, 00:10:44.424 "abort": true, 00:10:44.424 "seek_hole": false, 00:10:44.424 "seek_data": false, 
00:10:44.424 "copy": true, 00:10:44.424 "nvme_iov_md": false 00:10:44.424 }, 00:10:44.424 "memory_domains": [ 00:10:44.424 { 00:10:44.424 "dma_device_id": "system", 00:10:44.424 "dma_device_type": 1 00:10:44.424 }, 00:10:44.424 { 00:10:44.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.424 "dma_device_type": 2 00:10:44.424 } 00:10:44.424 ], 00:10:44.424 "driver_specific": {} 00:10:44.424 } 00:10:44.424 ] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 BaseBdev3 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.425 
10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 [ 00:10:44.425 { 00:10:44.425 "name": "BaseBdev3", 00:10:44.425 "aliases": [ 00:10:44.425 "80d547c8-2cb7-4975-98f3-b7da6d95ed4b" 00:10:44.425 ], 00:10:44.425 "product_name": "Malloc disk", 00:10:44.425 "block_size": 512, 00:10:44.425 "num_blocks": 65536, 00:10:44.425 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:44.425 "assigned_rate_limits": { 00:10:44.425 "rw_ios_per_sec": 0, 00:10:44.425 "rw_mbytes_per_sec": 0, 00:10:44.425 "r_mbytes_per_sec": 0, 00:10:44.425 "w_mbytes_per_sec": 0 00:10:44.425 }, 00:10:44.425 "claimed": false, 00:10:44.425 "zoned": false, 00:10:44.425 "supported_io_types": { 00:10:44.425 "read": true, 00:10:44.425 "write": true, 00:10:44.425 "unmap": true, 00:10:44.425 "flush": true, 00:10:44.425 "reset": true, 00:10:44.425 "nvme_admin": false, 00:10:44.425 "nvme_io": false, 00:10:44.425 "nvme_io_md": false, 00:10:44.425 "write_zeroes": true, 00:10:44.425 "zcopy": true, 00:10:44.425 "get_zone_info": false, 00:10:44.425 "zone_management": false, 00:10:44.425 "zone_append": false, 00:10:44.425 "compare": false, 00:10:44.425 "compare_and_write": false, 00:10:44.425 "abort": true, 00:10:44.425 "seek_hole": false, 00:10:44.425 "seek_data": false, 00:10:44.425 
"copy": true, 00:10:44.425 "nvme_iov_md": false 00:10:44.425 }, 00:10:44.425 "memory_domains": [ 00:10:44.425 { 00:10:44.425 "dma_device_id": "system", 00:10:44.425 "dma_device_type": 1 00:10:44.425 }, 00:10:44.425 { 00:10:44.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.425 "dma_device_type": 2 00:10:44.425 } 00:10:44.425 ], 00:10:44.425 "driver_specific": {} 00:10:44.425 } 00:10:44.425 ] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 BaseBdev4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.425 10:04:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 [ 00:10:44.425 { 00:10:44.425 "name": "BaseBdev4", 00:10:44.425 "aliases": [ 00:10:44.425 "cd1fbc2a-fe02-453b-a7dd-8a457372b516" 00:10:44.425 ], 00:10:44.425 "product_name": "Malloc disk", 00:10:44.425 "block_size": 512, 00:10:44.425 "num_blocks": 65536, 00:10:44.425 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:44.425 "assigned_rate_limits": { 00:10:44.425 "rw_ios_per_sec": 0, 00:10:44.425 "rw_mbytes_per_sec": 0, 00:10:44.425 "r_mbytes_per_sec": 0, 00:10:44.425 "w_mbytes_per_sec": 0 00:10:44.425 }, 00:10:44.425 "claimed": false, 00:10:44.425 "zoned": false, 00:10:44.425 "supported_io_types": { 00:10:44.425 "read": true, 00:10:44.425 "write": true, 00:10:44.425 "unmap": true, 00:10:44.425 "flush": true, 00:10:44.425 "reset": true, 00:10:44.425 "nvme_admin": false, 00:10:44.425 "nvme_io": false, 00:10:44.425 "nvme_io_md": false, 00:10:44.425 "write_zeroes": true, 00:10:44.425 "zcopy": true, 00:10:44.425 "get_zone_info": false, 00:10:44.425 "zone_management": false, 00:10:44.425 "zone_append": false, 00:10:44.425 "compare": false, 00:10:44.425 "compare_and_write": false, 00:10:44.425 "abort": true, 00:10:44.425 "seek_hole": false, 00:10:44.425 "seek_data": false, 00:10:44.425 "copy": true, 
00:10:44.425 "nvme_iov_md": false 00:10:44.425 }, 00:10:44.425 "memory_domains": [ 00:10:44.425 { 00:10:44.425 "dma_device_id": "system", 00:10:44.425 "dma_device_type": 1 00:10:44.425 }, 00:10:44.425 { 00:10:44.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.425 "dma_device_type": 2 00:10:44.425 } 00:10:44.425 ], 00:10:44.425 "driver_specific": {} 00:10:44.425 } 00:10:44.425 ] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 [2024-11-19 10:04:58.625364] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.425 [2024-11-19 10:04:58.625446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.425 [2024-11-19 10:04:58.625492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.425 [2024-11-19 10:04:58.628517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.425 [2024-11-19 10:04:58.628630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.425 10:04:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.425 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.684 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.684 "name": "Existed_Raid", 00:10:44.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.684 "strip_size_kb": 64, 00:10:44.684 "state": "configuring", 00:10:44.684 
"raid_level": "raid0", 00:10:44.684 "superblock": false, 00:10:44.684 "num_base_bdevs": 4, 00:10:44.684 "num_base_bdevs_discovered": 3, 00:10:44.684 "num_base_bdevs_operational": 4, 00:10:44.684 "base_bdevs_list": [ 00:10:44.684 { 00:10:44.684 "name": "BaseBdev1", 00:10:44.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.684 "is_configured": false, 00:10:44.684 "data_offset": 0, 00:10:44.684 "data_size": 0 00:10:44.684 }, 00:10:44.684 { 00:10:44.684 "name": "BaseBdev2", 00:10:44.684 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:44.684 "is_configured": true, 00:10:44.684 "data_offset": 0, 00:10:44.684 "data_size": 65536 00:10:44.684 }, 00:10:44.684 { 00:10:44.684 "name": "BaseBdev3", 00:10:44.684 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:44.684 "is_configured": true, 00:10:44.684 "data_offset": 0, 00:10:44.684 "data_size": 65536 00:10:44.684 }, 00:10:44.684 { 00:10:44.684 "name": "BaseBdev4", 00:10:44.684 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:44.684 "is_configured": true, 00:10:44.684 "data_offset": 0, 00:10:44.684 "data_size": 65536 00:10:44.684 } 00:10:44.684 ] 00:10:44.684 }' 00:10:44.684 10:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.684 10:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.942 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.943 [2024-11-19 10:04:59.117510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.943 "name": "Existed_Raid", 00:10:44.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.943 "strip_size_kb": 64, 00:10:44.943 "state": "configuring", 00:10:44.943 "raid_level": "raid0", 00:10:44.943 "superblock": false, 00:10:44.943 
"num_base_bdevs": 4, 00:10:44.943 "num_base_bdevs_discovered": 2, 00:10:44.943 "num_base_bdevs_operational": 4, 00:10:44.943 "base_bdevs_list": [ 00:10:44.943 { 00:10:44.943 "name": "BaseBdev1", 00:10:44.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.943 "is_configured": false, 00:10:44.943 "data_offset": 0, 00:10:44.943 "data_size": 0 00:10:44.943 }, 00:10:44.943 { 00:10:44.943 "name": null, 00:10:44.943 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:44.943 "is_configured": false, 00:10:44.943 "data_offset": 0, 00:10:44.943 "data_size": 65536 00:10:44.943 }, 00:10:44.943 { 00:10:44.943 "name": "BaseBdev3", 00:10:44.943 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:44.943 "is_configured": true, 00:10:44.943 "data_offset": 0, 00:10:44.943 "data_size": 65536 00:10:44.943 }, 00:10:44.943 { 00:10:44.943 "name": "BaseBdev4", 00:10:44.943 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:44.943 "is_configured": true, 00:10:44.943 "data_offset": 0, 00:10:44.943 "data_size": 65536 00:10:44.943 } 00:10:44.943 ] 00:10:44.943 }' 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.943 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.508 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.509 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.509 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.509 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.509 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.767 10:04:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.767 [2024-11-19 10:04:59.796261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.767 BaseBdev1 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.767 [ 00:10:45.767 { 00:10:45.767 "name": "BaseBdev1", 00:10:45.767 "aliases": [ 00:10:45.767 "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af" 00:10:45.767 ], 00:10:45.767 "product_name": "Malloc disk", 00:10:45.767 "block_size": 512, 00:10:45.767 "num_blocks": 65536, 00:10:45.767 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:45.767 "assigned_rate_limits": { 00:10:45.767 "rw_ios_per_sec": 0, 00:10:45.767 "rw_mbytes_per_sec": 0, 00:10:45.767 "r_mbytes_per_sec": 0, 00:10:45.767 "w_mbytes_per_sec": 0 00:10:45.767 }, 00:10:45.767 "claimed": true, 00:10:45.767 "claim_type": "exclusive_write", 00:10:45.767 "zoned": false, 00:10:45.767 "supported_io_types": { 00:10:45.767 "read": true, 00:10:45.767 "write": true, 00:10:45.767 "unmap": true, 00:10:45.767 "flush": true, 00:10:45.767 "reset": true, 00:10:45.767 "nvme_admin": false, 00:10:45.767 "nvme_io": false, 00:10:45.767 "nvme_io_md": false, 00:10:45.767 "write_zeroes": true, 00:10:45.767 "zcopy": true, 00:10:45.767 "get_zone_info": false, 00:10:45.767 "zone_management": false, 00:10:45.767 "zone_append": false, 00:10:45.767 "compare": false, 00:10:45.767 "compare_and_write": false, 00:10:45.767 "abort": true, 00:10:45.767 "seek_hole": false, 00:10:45.767 "seek_data": false, 00:10:45.767 "copy": true, 00:10:45.767 "nvme_iov_md": false 00:10:45.767 }, 00:10:45.767 "memory_domains": [ 00:10:45.767 { 00:10:45.767 "dma_device_id": "system", 00:10:45.767 "dma_device_type": 1 00:10:45.767 }, 00:10:45.767 { 00:10:45.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.767 "dma_device_type": 2 00:10:45.767 } 00:10:45.767 ], 00:10:45.767 "driver_specific": {} 00:10:45.767 } 00:10:45.767 ] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.767 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.768 "name": "Existed_Raid", 00:10:45.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.768 "strip_size_kb": 64, 00:10:45.768 "state": "configuring", 00:10:45.768 "raid_level": "raid0", 00:10:45.768 "superblock": false, 
00:10:45.768 "num_base_bdevs": 4, 00:10:45.768 "num_base_bdevs_discovered": 3, 00:10:45.768 "num_base_bdevs_operational": 4, 00:10:45.768 "base_bdevs_list": [ 00:10:45.768 { 00:10:45.768 "name": "BaseBdev1", 00:10:45.768 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:45.768 "is_configured": true, 00:10:45.768 "data_offset": 0, 00:10:45.768 "data_size": 65536 00:10:45.768 }, 00:10:45.768 { 00:10:45.768 "name": null, 00:10:45.768 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:45.768 "is_configured": false, 00:10:45.768 "data_offset": 0, 00:10:45.768 "data_size": 65536 00:10:45.768 }, 00:10:45.768 { 00:10:45.768 "name": "BaseBdev3", 00:10:45.768 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:45.768 "is_configured": true, 00:10:45.768 "data_offset": 0, 00:10:45.768 "data_size": 65536 00:10:45.768 }, 00:10:45.768 { 00:10:45.768 "name": "BaseBdev4", 00:10:45.768 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:45.768 "is_configured": true, 00:10:45.768 "data_offset": 0, 00:10:45.768 "data_size": 65536 00:10:45.768 } 00:10:45.768 ] 00:10:45.768 }' 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.768 10:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.336 10:05:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.336 [2024-11-19 10:05:00.468612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.336 "name": "Existed_Raid", 00:10:46.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.336 "strip_size_kb": 64, 00:10:46.336 "state": "configuring", 00:10:46.336 "raid_level": "raid0", 00:10:46.336 "superblock": false, 00:10:46.336 "num_base_bdevs": 4, 00:10:46.336 "num_base_bdevs_discovered": 2, 00:10:46.336 "num_base_bdevs_operational": 4, 00:10:46.336 "base_bdevs_list": [ 00:10:46.336 { 00:10:46.336 "name": "BaseBdev1", 00:10:46.336 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:46.336 "is_configured": true, 00:10:46.336 "data_offset": 0, 00:10:46.336 "data_size": 65536 00:10:46.336 }, 00:10:46.336 { 00:10:46.336 "name": null, 00:10:46.336 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:46.336 "is_configured": false, 00:10:46.336 "data_offset": 0, 00:10:46.336 "data_size": 65536 00:10:46.336 }, 00:10:46.336 { 00:10:46.336 "name": null, 00:10:46.336 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:46.336 "is_configured": false, 00:10:46.336 "data_offset": 0, 00:10:46.336 "data_size": 65536 00:10:46.336 }, 00:10:46.336 { 00:10:46.336 "name": "BaseBdev4", 00:10:46.336 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:46.336 "is_configured": true, 00:10:46.336 "data_offset": 0, 00:10:46.336 "data_size": 65536 00:10:46.336 } 00:10:46.336 ] 00:10:46.336 }' 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.336 10:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.903 [2024-11-19 10:05:01.124757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.903 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.161 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.161 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.161 "name": "Existed_Raid", 00:10:47.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.161 "strip_size_kb": 64, 00:10:47.161 "state": "configuring", 00:10:47.161 "raid_level": "raid0", 00:10:47.161 "superblock": false, 00:10:47.161 "num_base_bdevs": 4, 00:10:47.161 "num_base_bdevs_discovered": 3, 00:10:47.161 "num_base_bdevs_operational": 4, 00:10:47.161 "base_bdevs_list": [ 00:10:47.161 { 00:10:47.161 "name": "BaseBdev1", 00:10:47.161 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:47.161 "is_configured": true, 00:10:47.161 "data_offset": 0, 00:10:47.161 "data_size": 65536 00:10:47.161 }, 00:10:47.161 { 00:10:47.161 "name": null, 00:10:47.161 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:47.161 "is_configured": false, 00:10:47.161 "data_offset": 0, 00:10:47.161 "data_size": 65536 00:10:47.161 }, 00:10:47.161 { 00:10:47.161 "name": "BaseBdev3", 00:10:47.161 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:47.161 "is_configured": 
true, 00:10:47.161 "data_offset": 0, 00:10:47.161 "data_size": 65536 00:10:47.161 }, 00:10:47.161 { 00:10:47.161 "name": "BaseBdev4", 00:10:47.161 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:47.161 "is_configured": true, 00:10:47.161 "data_offset": 0, 00:10:47.161 "data_size": 65536 00:10:47.161 } 00:10:47.161 ] 00:10:47.161 }' 00:10:47.161 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.161 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 [2024-11-19 10:05:01.765035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.728 "name": "Existed_Raid", 00:10:47.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.728 "strip_size_kb": 64, 00:10:47.728 "state": "configuring", 00:10:47.728 "raid_level": "raid0", 00:10:47.728 "superblock": false, 00:10:47.728 "num_base_bdevs": 4, 00:10:47.728 "num_base_bdevs_discovered": 2, 00:10:47.728 "num_base_bdevs_operational": 4, 00:10:47.728 
"base_bdevs_list": [ 00:10:47.728 { 00:10:47.728 "name": null, 00:10:47.728 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:47.728 "is_configured": false, 00:10:47.728 "data_offset": 0, 00:10:47.728 "data_size": 65536 00:10:47.728 }, 00:10:47.728 { 00:10:47.728 "name": null, 00:10:47.728 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:47.728 "is_configured": false, 00:10:47.728 "data_offset": 0, 00:10:47.728 "data_size": 65536 00:10:47.728 }, 00:10:47.728 { 00:10:47.728 "name": "BaseBdev3", 00:10:47.728 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:47.728 "is_configured": true, 00:10:47.728 "data_offset": 0, 00:10:47.728 "data_size": 65536 00:10:47.728 }, 00:10:47.728 { 00:10:47.728 "name": "BaseBdev4", 00:10:47.728 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:47.728 "is_configured": true, 00:10:47.728 "data_offset": 0, 00:10:47.728 "data_size": 65536 00:10:47.728 } 00:10:47.728 ] 00:10:47.728 }' 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.728 10:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.294 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.294 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.294 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.294 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.294 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:48.295 10:05:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.295 [2024-11-19 10:05:02.484112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:10:48.295 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.562 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.562 "name": "Existed_Raid", 00:10:48.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.562 "strip_size_kb": 64, 00:10:48.562 "state": "configuring", 00:10:48.562 "raid_level": "raid0", 00:10:48.562 "superblock": false, 00:10:48.562 "num_base_bdevs": 4, 00:10:48.562 "num_base_bdevs_discovered": 3, 00:10:48.562 "num_base_bdevs_operational": 4, 00:10:48.562 "base_bdevs_list": [ 00:10:48.562 { 00:10:48.562 "name": null, 00:10:48.562 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:48.562 "is_configured": false, 00:10:48.562 "data_offset": 0, 00:10:48.562 "data_size": 65536 00:10:48.562 }, 00:10:48.562 { 00:10:48.562 "name": "BaseBdev2", 00:10:48.562 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:48.562 "is_configured": true, 00:10:48.562 "data_offset": 0, 00:10:48.562 "data_size": 65536 00:10:48.562 }, 00:10:48.562 { 00:10:48.562 "name": "BaseBdev3", 00:10:48.562 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:48.562 "is_configured": true, 00:10:48.562 "data_offset": 0, 00:10:48.562 "data_size": 65536 00:10:48.562 }, 00:10:48.562 { 00:10:48.562 "name": "BaseBdev4", 00:10:48.562 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:48.562 "is_configured": true, 00:10:48.562 "data_offset": 0, 00:10:48.562 "data_size": 65536 00:10:48.562 } 00:10:48.562 ] 00:10:48.562 }' 00:10:48.562 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.562 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.840 10:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.840 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.840 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bce89b09-5cc9-4d95-a5fa-2da53f8fa2af 00:10:48.840 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.840 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.840 [2024-11-19 10:05:03.069414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.840 [2024-11-19 10:05:03.069517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.840 [2024-11-19 10:05:03.069530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:48.840 [2024-11-19 10:05:03.069921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:48.840 [2024-11-19 10:05:03.070144] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:10:48.840 [2024-11-19 10:05:03.070167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:48.840 [2024-11-19 10:05:03.070506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.840 NewBaseBdev 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.099 [ 00:10:49.099 { 00:10:49.099 "name": "NewBaseBdev", 00:10:49.099 
"aliases": [ 00:10:49.099 "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af" 00:10:49.099 ], 00:10:49.099 "product_name": "Malloc disk", 00:10:49.099 "block_size": 512, 00:10:49.099 "num_blocks": 65536, 00:10:49.099 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:49.099 "assigned_rate_limits": { 00:10:49.099 "rw_ios_per_sec": 0, 00:10:49.099 "rw_mbytes_per_sec": 0, 00:10:49.099 "r_mbytes_per_sec": 0, 00:10:49.099 "w_mbytes_per_sec": 0 00:10:49.099 }, 00:10:49.099 "claimed": true, 00:10:49.099 "claim_type": "exclusive_write", 00:10:49.099 "zoned": false, 00:10:49.099 "supported_io_types": { 00:10:49.099 "read": true, 00:10:49.099 "write": true, 00:10:49.099 "unmap": true, 00:10:49.099 "flush": true, 00:10:49.099 "reset": true, 00:10:49.099 "nvme_admin": false, 00:10:49.099 "nvme_io": false, 00:10:49.099 "nvme_io_md": false, 00:10:49.099 "write_zeroes": true, 00:10:49.099 "zcopy": true, 00:10:49.099 "get_zone_info": false, 00:10:49.099 "zone_management": false, 00:10:49.099 "zone_append": false, 00:10:49.099 "compare": false, 00:10:49.099 "compare_and_write": false, 00:10:49.099 "abort": true, 00:10:49.099 "seek_hole": false, 00:10:49.099 "seek_data": false, 00:10:49.099 "copy": true, 00:10:49.099 "nvme_iov_md": false 00:10:49.099 }, 00:10:49.099 "memory_domains": [ 00:10:49.099 { 00:10:49.099 "dma_device_id": "system", 00:10:49.099 "dma_device_type": 1 00:10:49.099 }, 00:10:49.099 { 00:10:49.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.099 "dma_device_type": 2 00:10:49.099 } 00:10:49.099 ], 00:10:49.099 "driver_specific": {} 00:10:49.099 } 00:10:49.099 ] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.099 "name": "Existed_Raid", 00:10:49.099 "uuid": "4898e558-ac8c-4065-a420-a6a5ce94b7e5", 00:10:49.099 "strip_size_kb": 64, 00:10:49.099 "state": "online", 00:10:49.099 "raid_level": "raid0", 00:10:49.099 "superblock": false, 00:10:49.099 "num_base_bdevs": 4, 00:10:49.099 "num_base_bdevs_discovered": 4, 00:10:49.099 "num_base_bdevs_operational": 4, 00:10:49.099 
"base_bdevs_list": [ 00:10:49.099 { 00:10:49.099 "name": "NewBaseBdev", 00:10:49.099 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:49.099 "is_configured": true, 00:10:49.099 "data_offset": 0, 00:10:49.099 "data_size": 65536 00:10:49.099 }, 00:10:49.099 { 00:10:49.099 "name": "BaseBdev2", 00:10:49.099 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:49.099 "is_configured": true, 00:10:49.099 "data_offset": 0, 00:10:49.099 "data_size": 65536 00:10:49.099 }, 00:10:49.099 { 00:10:49.099 "name": "BaseBdev3", 00:10:49.099 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:49.099 "is_configured": true, 00:10:49.099 "data_offset": 0, 00:10:49.099 "data_size": 65536 00:10:49.099 }, 00:10:49.099 { 00:10:49.099 "name": "BaseBdev4", 00:10:49.099 "uuid": "cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:49.099 "is_configured": true, 00:10:49.099 "data_offset": 0, 00:10:49.099 "data_size": 65536 00:10:49.099 } 00:10:49.099 ] 00:10:49.099 }' 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.099 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.358 10:05:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.358 [2024-11-19 10:05:03.566168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.358 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.617 "name": "Existed_Raid", 00:10:49.617 "aliases": [ 00:10:49.617 "4898e558-ac8c-4065-a420-a6a5ce94b7e5" 00:10:49.617 ], 00:10:49.617 "product_name": "Raid Volume", 00:10:49.617 "block_size": 512, 00:10:49.617 "num_blocks": 262144, 00:10:49.617 "uuid": "4898e558-ac8c-4065-a420-a6a5ce94b7e5", 00:10:49.617 "assigned_rate_limits": { 00:10:49.617 "rw_ios_per_sec": 0, 00:10:49.617 "rw_mbytes_per_sec": 0, 00:10:49.617 "r_mbytes_per_sec": 0, 00:10:49.617 "w_mbytes_per_sec": 0 00:10:49.617 }, 00:10:49.617 "claimed": false, 00:10:49.617 "zoned": false, 00:10:49.617 "supported_io_types": { 00:10:49.617 "read": true, 00:10:49.617 "write": true, 00:10:49.617 "unmap": true, 00:10:49.617 "flush": true, 00:10:49.617 "reset": true, 00:10:49.617 "nvme_admin": false, 00:10:49.617 "nvme_io": false, 00:10:49.617 "nvme_io_md": false, 00:10:49.617 "write_zeroes": true, 00:10:49.617 "zcopy": false, 00:10:49.617 "get_zone_info": false, 00:10:49.617 "zone_management": false, 00:10:49.617 "zone_append": false, 00:10:49.617 "compare": false, 00:10:49.617 "compare_and_write": false, 00:10:49.617 "abort": false, 00:10:49.617 "seek_hole": false, 00:10:49.617 "seek_data": false, 00:10:49.617 "copy": false, 00:10:49.617 "nvme_iov_md": false 00:10:49.617 }, 00:10:49.617 "memory_domains": [ 00:10:49.617 { 00:10:49.617 "dma_device_id": "system", 00:10:49.617 "dma_device_type": 1 
00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.617 "dma_device_type": 2 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "system", 00:10:49.617 "dma_device_type": 1 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.617 "dma_device_type": 2 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "system", 00:10:49.617 "dma_device_type": 1 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.617 "dma_device_type": 2 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "system", 00:10:49.617 "dma_device_type": 1 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.617 "dma_device_type": 2 00:10:49.617 } 00:10:49.617 ], 00:10:49.617 "driver_specific": { 00:10:49.617 "raid": { 00:10:49.617 "uuid": "4898e558-ac8c-4065-a420-a6a5ce94b7e5", 00:10:49.617 "strip_size_kb": 64, 00:10:49.617 "state": "online", 00:10:49.617 "raid_level": "raid0", 00:10:49.617 "superblock": false, 00:10:49.617 "num_base_bdevs": 4, 00:10:49.617 "num_base_bdevs_discovered": 4, 00:10:49.617 "num_base_bdevs_operational": 4, 00:10:49.617 "base_bdevs_list": [ 00:10:49.617 { 00:10:49.617 "name": "NewBaseBdev", 00:10:49.617 "uuid": "bce89b09-5cc9-4d95-a5fa-2da53f8fa2af", 00:10:49.617 "is_configured": true, 00:10:49.617 "data_offset": 0, 00:10:49.617 "data_size": 65536 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "name": "BaseBdev2", 00:10:49.617 "uuid": "e64d7aa6-ea9f-4b67-8b6f-b622afe6d6bc", 00:10:49.617 "is_configured": true, 00:10:49.617 "data_offset": 0, 00:10:49.617 "data_size": 65536 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "name": "BaseBdev3", 00:10:49.617 "uuid": "80d547c8-2cb7-4975-98f3-b7da6d95ed4b", 00:10:49.617 "is_configured": true, 00:10:49.617 "data_offset": 0, 00:10:49.617 "data_size": 65536 00:10:49.617 }, 00:10:49.617 { 00:10:49.617 "name": "BaseBdev4", 00:10:49.617 "uuid": 
"cd1fbc2a-fe02-453b-a7dd-8a457372b516", 00:10:49.617 "is_configured": true, 00:10:49.617 "data_offset": 0, 00:10:49.617 "data_size": 65536 00:10:49.617 } 00:10:49.617 ] 00:10:49.617 } 00:10:49.617 } 00:10:49.617 }' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.617 BaseBdev2 00:10:49.617 BaseBdev3 00:10:49.617 BaseBdev4' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.617 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.618 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.877 10:05:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.877 [2024-11-19 10:05:03.945757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.877 [2024-11-19 10:05:03.945822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.877 [2024-11-19 10:05:03.945949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.877 [2024-11-19 10:05:03.946058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.877 [2024-11-19 10:05:03.946076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69335 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69335 ']' 00:10:49.877 
10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69335 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69335 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.877 killing process with pid 69335 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69335' 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69335 00:10:49.877 [2024-11-19 10:05:03.989504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.877 10:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69335 00:10:50.445 [2024-11-19 10:05:04.377608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.385 ************************************ 00:10:51.385 END TEST raid_state_function_test 00:10:51.385 ************************************ 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:51.385 00:10:51.385 real 0m13.221s 00:10:51.385 user 0m21.685s 00:10:51.385 sys 0m1.856s 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.385 10:05:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:51.385 10:05:05 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.385 10:05:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.385 10:05:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.385 ************************************ 00:10:51.385 START TEST raid_state_function_test_sb 00:10:51.385 ************************************ 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:51.385 Process raid pid: 70022 00:10:51.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70022 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70022' 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70022 00:10:51.385 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70022 ']' 00:10:51.386 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.386 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.386 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.386 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.386 10:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.644 [2024-11-19 10:05:05.674328] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:10:51.644 [2024-11-19 10:05:05.674537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.644 [2024-11-19 10:05:05.870909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.902 [2024-11-19 10:05:06.045295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.159 [2024-11-19 10:05:06.277723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.159 [2024-11-19 10:05:06.277816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.741 [2024-11-19 10:05:06.700349] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.741 [2024-11-19 10:05:06.700425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.741 [2024-11-19 10:05:06.700444] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.741 [2024-11-19 10:05:06.700460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.741 [2024-11-19 10:05:06.700470] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:52.741 [2024-11-19 10:05:06.700485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.741 [2024-11-19 10:05:06.700495] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.741 [2024-11-19 10:05:06.700510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.741 10:05:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.741 "name": "Existed_Raid", 00:10:52.741 "uuid": "758ed61b-16fe-40e4-9883-ceb0d3d4e234", 00:10:52.741 "strip_size_kb": 64, 00:10:52.741 "state": "configuring", 00:10:52.741 "raid_level": "raid0", 00:10:52.741 "superblock": true, 00:10:52.741 "num_base_bdevs": 4, 00:10:52.741 "num_base_bdevs_discovered": 0, 00:10:52.741 "num_base_bdevs_operational": 4, 00:10:52.741 "base_bdevs_list": [ 00:10:52.741 { 00:10:52.741 "name": "BaseBdev1", 00:10:52.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.741 "is_configured": false, 00:10:52.741 "data_offset": 0, 00:10:52.741 "data_size": 0 00:10:52.741 }, 00:10:52.741 { 00:10:52.741 "name": "BaseBdev2", 00:10:52.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.741 "is_configured": false, 00:10:52.741 "data_offset": 0, 00:10:52.741 "data_size": 0 00:10:52.741 }, 00:10:52.741 { 00:10:52.741 "name": "BaseBdev3", 00:10:52.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.741 "is_configured": false, 00:10:52.741 "data_offset": 0, 00:10:52.741 "data_size": 0 00:10:52.741 }, 00:10:52.741 { 00:10:52.741 "name": "BaseBdev4", 00:10:52.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.741 "is_configured": false, 00:10:52.741 "data_offset": 0, 00:10:52.741 "data_size": 0 00:10:52.741 } 00:10:52.741 ] 00:10:52.741 }' 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.741 10:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 [2024-11-19 10:05:07.200390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.000 [2024-11-19 10:05:07.200448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 [2024-11-19 10:05:07.208402] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.000 [2024-11-19 10:05:07.208474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.000 [2024-11-19 10:05:07.208501] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.000 [2024-11-19 10:05:07.208525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.000 [2024-11-19 10:05:07.208537] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.000 [2024-11-19 10:05:07.208552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.000 [2024-11-19 10:05:07.208562] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:53.000 [2024-11-19 10:05:07.208576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.000 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 [2024-11-19 10:05:07.257361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.256 BaseBdev1 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.257 [ 00:10:53.257 { 00:10:53.257 "name": "BaseBdev1", 00:10:53.257 "aliases": [ 00:10:53.257 "3e78689e-0143-4fff-b3e3-8c1e7169f7c5" 00:10:53.257 ], 00:10:53.257 "product_name": "Malloc disk", 00:10:53.257 "block_size": 512, 00:10:53.257 "num_blocks": 65536, 00:10:53.257 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:53.257 "assigned_rate_limits": { 00:10:53.257 "rw_ios_per_sec": 0, 00:10:53.257 "rw_mbytes_per_sec": 0, 00:10:53.257 "r_mbytes_per_sec": 0, 00:10:53.257 "w_mbytes_per_sec": 0 00:10:53.257 }, 00:10:53.257 "claimed": true, 00:10:53.257 "claim_type": "exclusive_write", 00:10:53.257 "zoned": false, 00:10:53.257 "supported_io_types": { 00:10:53.257 "read": true, 00:10:53.257 "write": true, 00:10:53.257 "unmap": true, 00:10:53.257 "flush": true, 00:10:53.257 "reset": true, 00:10:53.257 "nvme_admin": false, 00:10:53.257 "nvme_io": false, 00:10:53.257 "nvme_io_md": false, 00:10:53.257 "write_zeroes": true, 00:10:53.257 "zcopy": true, 00:10:53.257 "get_zone_info": false, 00:10:53.257 "zone_management": false, 00:10:53.257 "zone_append": false, 00:10:53.257 "compare": false, 00:10:53.257 "compare_and_write": false, 00:10:53.257 "abort": true, 00:10:53.257 "seek_hole": false, 00:10:53.257 "seek_data": false, 00:10:53.257 "copy": true, 00:10:53.257 "nvme_iov_md": false 00:10:53.257 }, 00:10:53.257 "memory_domains": [ 00:10:53.257 { 00:10:53.257 "dma_device_id": "system", 00:10:53.257 "dma_device_type": 1 00:10:53.257 }, 00:10:53.257 { 00:10:53.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.257 "dma_device_type": 2 00:10:53.257 } 00:10:53.257 ], 00:10:53.257 "driver_specific": {} 
00:10:53.257 } 00:10:53.257 ] 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.257 "name": "Existed_Raid", 00:10:53.257 "uuid": "5f8919c9-c314-4d21-9485-40c78708cc68", 00:10:53.257 "strip_size_kb": 64, 00:10:53.257 "state": "configuring", 00:10:53.257 "raid_level": "raid0", 00:10:53.257 "superblock": true, 00:10:53.257 "num_base_bdevs": 4, 00:10:53.257 "num_base_bdevs_discovered": 1, 00:10:53.257 "num_base_bdevs_operational": 4, 00:10:53.257 "base_bdevs_list": [ 00:10:53.257 { 00:10:53.257 "name": "BaseBdev1", 00:10:53.257 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:53.257 "is_configured": true, 00:10:53.257 "data_offset": 2048, 00:10:53.257 "data_size": 63488 00:10:53.257 }, 00:10:53.257 { 00:10:53.257 "name": "BaseBdev2", 00:10:53.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.257 "is_configured": false, 00:10:53.257 "data_offset": 0, 00:10:53.257 "data_size": 0 00:10:53.257 }, 00:10:53.257 { 00:10:53.257 "name": "BaseBdev3", 00:10:53.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.257 "is_configured": false, 00:10:53.257 "data_offset": 0, 00:10:53.257 "data_size": 0 00:10:53.257 }, 00:10:53.257 { 00:10:53.257 "name": "BaseBdev4", 00:10:53.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.257 "is_configured": false, 00:10:53.257 "data_offset": 0, 00:10:53.257 "data_size": 0 00:10:53.257 } 00:10:53.257 ] 00:10:53.257 }' 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.257 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.821 [2024-11-19 10:05:07.809577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.821 [2024-11-19 10:05:07.809656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.821 [2024-11-19 10:05:07.817663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.821 [2024-11-19 10:05:07.820342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.821 [2024-11-19 10:05:07.820401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.821 [2024-11-19 10:05:07.820419] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.821 [2024-11-19 10:05:07.820437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.821 [2024-11-19 10:05:07.820447] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.821 [2024-11-19 10:05:07.820462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.821 10:05:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.821 "name": 
"Existed_Raid", 00:10:53.821 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:53.821 "strip_size_kb": 64, 00:10:53.821 "state": "configuring", 00:10:53.821 "raid_level": "raid0", 00:10:53.821 "superblock": true, 00:10:53.821 "num_base_bdevs": 4, 00:10:53.821 "num_base_bdevs_discovered": 1, 00:10:53.821 "num_base_bdevs_operational": 4, 00:10:53.821 "base_bdevs_list": [ 00:10:53.821 { 00:10:53.821 "name": "BaseBdev1", 00:10:53.821 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:53.821 "is_configured": true, 00:10:53.821 "data_offset": 2048, 00:10:53.821 "data_size": 63488 00:10:53.821 }, 00:10:53.821 { 00:10:53.821 "name": "BaseBdev2", 00:10:53.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.821 "is_configured": false, 00:10:53.821 "data_offset": 0, 00:10:53.821 "data_size": 0 00:10:53.821 }, 00:10:53.821 { 00:10:53.821 "name": "BaseBdev3", 00:10:53.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.821 "is_configured": false, 00:10:53.821 "data_offset": 0, 00:10:53.821 "data_size": 0 00:10:53.821 }, 00:10:53.821 { 00:10:53.821 "name": "BaseBdev4", 00:10:53.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.821 "is_configured": false, 00:10:53.821 "data_offset": 0, 00:10:53.821 "data_size": 0 00:10:53.821 } 00:10:53.821 ] 00:10:53.821 }' 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.821 10:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.387 [2024-11-19 10:05:08.407999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:54.387 BaseBdev2 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.387 [ 00:10:54.387 { 00:10:54.387 "name": "BaseBdev2", 00:10:54.387 "aliases": [ 00:10:54.387 "4a375984-a245-491a-a386-e0d7a015716c" 00:10:54.387 ], 00:10:54.387 "product_name": "Malloc disk", 00:10:54.387 "block_size": 512, 00:10:54.387 "num_blocks": 65536, 00:10:54.387 "uuid": "4a375984-a245-491a-a386-e0d7a015716c", 00:10:54.387 
"assigned_rate_limits": { 00:10:54.387 "rw_ios_per_sec": 0, 00:10:54.387 "rw_mbytes_per_sec": 0, 00:10:54.387 "r_mbytes_per_sec": 0, 00:10:54.387 "w_mbytes_per_sec": 0 00:10:54.387 }, 00:10:54.387 "claimed": true, 00:10:54.387 "claim_type": "exclusive_write", 00:10:54.387 "zoned": false, 00:10:54.387 "supported_io_types": { 00:10:54.387 "read": true, 00:10:54.387 "write": true, 00:10:54.387 "unmap": true, 00:10:54.387 "flush": true, 00:10:54.387 "reset": true, 00:10:54.387 "nvme_admin": false, 00:10:54.387 "nvme_io": false, 00:10:54.387 "nvme_io_md": false, 00:10:54.387 "write_zeroes": true, 00:10:54.387 "zcopy": true, 00:10:54.387 "get_zone_info": false, 00:10:54.387 "zone_management": false, 00:10:54.387 "zone_append": false, 00:10:54.387 "compare": false, 00:10:54.387 "compare_and_write": false, 00:10:54.387 "abort": true, 00:10:54.387 "seek_hole": false, 00:10:54.387 "seek_data": false, 00:10:54.387 "copy": true, 00:10:54.387 "nvme_iov_md": false 00:10:54.387 }, 00:10:54.387 "memory_domains": [ 00:10:54.387 { 00:10:54.387 "dma_device_id": "system", 00:10:54.387 "dma_device_type": 1 00:10:54.387 }, 00:10:54.387 { 00:10:54.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.387 "dma_device_type": 2 00:10:54.387 } 00:10:54.387 ], 00:10:54.387 "driver_specific": {} 00:10:54.387 } 00:10:54.387 ] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.387 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.387 "name": "Existed_Raid", 00:10:54.387 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:54.387 "strip_size_kb": 64, 00:10:54.387 "state": "configuring", 00:10:54.387 "raid_level": "raid0", 00:10:54.388 "superblock": true, 00:10:54.388 "num_base_bdevs": 4, 00:10:54.388 "num_base_bdevs_discovered": 2, 00:10:54.388 "num_base_bdevs_operational": 4, 
00:10:54.388 "base_bdevs_list": [ 00:10:54.388 { 00:10:54.388 "name": "BaseBdev1", 00:10:54.388 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:54.388 "is_configured": true, 00:10:54.388 "data_offset": 2048, 00:10:54.388 "data_size": 63488 00:10:54.388 }, 00:10:54.388 { 00:10:54.388 "name": "BaseBdev2", 00:10:54.388 "uuid": "4a375984-a245-491a-a386-e0d7a015716c", 00:10:54.388 "is_configured": true, 00:10:54.388 "data_offset": 2048, 00:10:54.388 "data_size": 63488 00:10:54.388 }, 00:10:54.388 { 00:10:54.388 "name": "BaseBdev3", 00:10:54.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.388 "is_configured": false, 00:10:54.388 "data_offset": 0, 00:10:54.388 "data_size": 0 00:10:54.388 }, 00:10:54.388 { 00:10:54.388 "name": "BaseBdev4", 00:10:54.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.388 "is_configured": false, 00:10:54.388 "data_offset": 0, 00:10:54.388 "data_size": 0 00:10:54.388 } 00:10:54.388 ] 00:10:54.388 }' 00:10:54.388 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.388 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.956 10:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.956 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.956 10:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.956 [2024-11-19 10:05:09.032526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.956 BaseBdev3 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.956 [ 00:10:54.956 { 00:10:54.956 "name": "BaseBdev3", 00:10:54.956 "aliases": [ 00:10:54.956 "39939334-bf1f-4edd-8045-90f2bf5da628" 00:10:54.956 ], 00:10:54.956 "product_name": "Malloc disk", 00:10:54.956 "block_size": 512, 00:10:54.956 "num_blocks": 65536, 00:10:54.956 "uuid": "39939334-bf1f-4edd-8045-90f2bf5da628", 00:10:54.956 "assigned_rate_limits": { 00:10:54.956 "rw_ios_per_sec": 0, 00:10:54.956 "rw_mbytes_per_sec": 0, 00:10:54.956 "r_mbytes_per_sec": 0, 00:10:54.956 "w_mbytes_per_sec": 0 00:10:54.956 }, 00:10:54.956 "claimed": true, 00:10:54.956 "claim_type": "exclusive_write", 00:10:54.956 "zoned": false, 00:10:54.956 "supported_io_types": { 00:10:54.956 "read": true, 00:10:54.956 
"write": true, 00:10:54.956 "unmap": true, 00:10:54.956 "flush": true, 00:10:54.956 "reset": true, 00:10:54.956 "nvme_admin": false, 00:10:54.956 "nvme_io": false, 00:10:54.956 "nvme_io_md": false, 00:10:54.956 "write_zeroes": true, 00:10:54.956 "zcopy": true, 00:10:54.956 "get_zone_info": false, 00:10:54.956 "zone_management": false, 00:10:54.956 "zone_append": false, 00:10:54.956 "compare": false, 00:10:54.956 "compare_and_write": false, 00:10:54.956 "abort": true, 00:10:54.956 "seek_hole": false, 00:10:54.956 "seek_data": false, 00:10:54.956 "copy": true, 00:10:54.956 "nvme_iov_md": false 00:10:54.956 }, 00:10:54.956 "memory_domains": [ 00:10:54.956 { 00:10:54.956 "dma_device_id": "system", 00:10:54.956 "dma_device_type": 1 00:10:54.956 }, 00:10:54.956 { 00:10:54.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.956 "dma_device_type": 2 00:10:54.956 } 00:10:54.956 ], 00:10:54.956 "driver_specific": {} 00:10:54.956 } 00:10:54.956 ] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.956 "name": "Existed_Raid", 00:10:54.956 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:54.956 "strip_size_kb": 64, 00:10:54.956 "state": "configuring", 00:10:54.956 "raid_level": "raid0", 00:10:54.956 "superblock": true, 00:10:54.956 "num_base_bdevs": 4, 00:10:54.956 "num_base_bdevs_discovered": 3, 00:10:54.956 "num_base_bdevs_operational": 4, 00:10:54.956 "base_bdevs_list": [ 00:10:54.956 { 00:10:54.956 "name": "BaseBdev1", 00:10:54.956 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:54.956 "is_configured": true, 00:10:54.956 "data_offset": 2048, 00:10:54.956 "data_size": 63488 00:10:54.956 }, 00:10:54.956 { 00:10:54.956 "name": "BaseBdev2", 00:10:54.956 "uuid": 
"4a375984-a245-491a-a386-e0d7a015716c", 00:10:54.956 "is_configured": true, 00:10:54.956 "data_offset": 2048, 00:10:54.956 "data_size": 63488 00:10:54.956 }, 00:10:54.956 { 00:10:54.956 "name": "BaseBdev3", 00:10:54.956 "uuid": "39939334-bf1f-4edd-8045-90f2bf5da628", 00:10:54.956 "is_configured": true, 00:10:54.956 "data_offset": 2048, 00:10:54.956 "data_size": 63488 00:10:54.956 }, 00:10:54.956 { 00:10:54.956 "name": "BaseBdev4", 00:10:54.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.956 "is_configured": false, 00:10:54.956 "data_offset": 0, 00:10:54.956 "data_size": 0 00:10:54.956 } 00:10:54.956 ] 00:10:54.956 }' 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.956 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 [2024-11-19 10:05:09.586834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.525 [2024-11-19 10:05:09.587496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.525 [2024-11-19 10:05:09.587524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.525 BaseBdev4 00:10:55.525 [2024-11-19 10:05:09.587916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.525 [2024-11-19 10:05:09.588126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.525 [2024-11-19 10:05:09.588156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:55.525 [2024-11-19 10:05:09.588344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 [ 00:10:55.525 { 00:10:55.525 "name": "BaseBdev4", 00:10:55.525 "aliases": [ 00:10:55.525 "ccd146eb-348e-4f87-9cee-819f3c74017f" 00:10:55.525 ], 00:10:55.525 "product_name": "Malloc disk", 00:10:55.525 "block_size": 512, 00:10:55.525 
"num_blocks": 65536, 00:10:55.525 "uuid": "ccd146eb-348e-4f87-9cee-819f3c74017f", 00:10:55.525 "assigned_rate_limits": { 00:10:55.525 "rw_ios_per_sec": 0, 00:10:55.525 "rw_mbytes_per_sec": 0, 00:10:55.525 "r_mbytes_per_sec": 0, 00:10:55.525 "w_mbytes_per_sec": 0 00:10:55.525 }, 00:10:55.525 "claimed": true, 00:10:55.525 "claim_type": "exclusive_write", 00:10:55.525 "zoned": false, 00:10:55.525 "supported_io_types": { 00:10:55.525 "read": true, 00:10:55.525 "write": true, 00:10:55.525 "unmap": true, 00:10:55.525 "flush": true, 00:10:55.525 "reset": true, 00:10:55.525 "nvme_admin": false, 00:10:55.525 "nvme_io": false, 00:10:55.525 "nvme_io_md": false, 00:10:55.525 "write_zeroes": true, 00:10:55.525 "zcopy": true, 00:10:55.525 "get_zone_info": false, 00:10:55.525 "zone_management": false, 00:10:55.525 "zone_append": false, 00:10:55.525 "compare": false, 00:10:55.525 "compare_and_write": false, 00:10:55.525 "abort": true, 00:10:55.525 "seek_hole": false, 00:10:55.525 "seek_data": false, 00:10:55.525 "copy": true, 00:10:55.525 "nvme_iov_md": false 00:10:55.525 }, 00:10:55.525 "memory_domains": [ 00:10:55.525 { 00:10:55.525 "dma_device_id": "system", 00:10:55.525 "dma_device_type": 1 00:10:55.525 }, 00:10:55.525 { 00:10:55.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.525 "dma_device_type": 2 00:10:55.525 } 00:10:55.525 ], 00:10:55.525 "driver_specific": {} 00:10:55.525 } 00:10:55.525 ] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.525 "name": "Existed_Raid", 00:10:55.525 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:55.525 "strip_size_kb": 64, 00:10:55.525 "state": "online", 00:10:55.525 "raid_level": "raid0", 00:10:55.525 "superblock": true, 00:10:55.525 "num_base_bdevs": 4, 
00:10:55.525 "num_base_bdevs_discovered": 4, 00:10:55.525 "num_base_bdevs_operational": 4, 00:10:55.525 "base_bdevs_list": [ 00:10:55.525 { 00:10:55.525 "name": "BaseBdev1", 00:10:55.525 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:55.525 "is_configured": true, 00:10:55.525 "data_offset": 2048, 00:10:55.525 "data_size": 63488 00:10:55.525 }, 00:10:55.525 { 00:10:55.525 "name": "BaseBdev2", 00:10:55.525 "uuid": "4a375984-a245-491a-a386-e0d7a015716c", 00:10:55.525 "is_configured": true, 00:10:55.525 "data_offset": 2048, 00:10:55.525 "data_size": 63488 00:10:55.525 }, 00:10:55.525 { 00:10:55.525 "name": "BaseBdev3", 00:10:55.525 "uuid": "39939334-bf1f-4edd-8045-90f2bf5da628", 00:10:55.525 "is_configured": true, 00:10:55.525 "data_offset": 2048, 00:10:55.525 "data_size": 63488 00:10:55.525 }, 00:10:55.525 { 00:10:55.525 "name": "BaseBdev4", 00:10:55.525 "uuid": "ccd146eb-348e-4f87-9cee-819f3c74017f", 00:10:55.525 "is_configured": true, 00:10:55.525 "data_offset": 2048, 00:10:55.525 "data_size": 63488 00:10:55.525 } 00:10:55.525 ] 00:10:55.525 }' 00:10:55.526 10:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.526 10:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.094 
10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 [2024-11-19 10:05:10.131502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.094 "name": "Existed_Raid", 00:10:56.094 "aliases": [ 00:10:56.094 "d0ab32e0-2e67-496a-9a40-f8404a2f6d08" 00:10:56.094 ], 00:10:56.094 "product_name": "Raid Volume", 00:10:56.094 "block_size": 512, 00:10:56.094 "num_blocks": 253952, 00:10:56.094 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:56.094 "assigned_rate_limits": { 00:10:56.094 "rw_ios_per_sec": 0, 00:10:56.094 "rw_mbytes_per_sec": 0, 00:10:56.094 "r_mbytes_per_sec": 0, 00:10:56.094 "w_mbytes_per_sec": 0 00:10:56.094 }, 00:10:56.094 "claimed": false, 00:10:56.094 "zoned": false, 00:10:56.094 "supported_io_types": { 00:10:56.094 "read": true, 00:10:56.094 "write": true, 00:10:56.094 "unmap": true, 00:10:56.094 "flush": true, 00:10:56.094 "reset": true, 00:10:56.094 "nvme_admin": false, 00:10:56.094 "nvme_io": false, 00:10:56.094 "nvme_io_md": false, 00:10:56.094 "write_zeroes": true, 00:10:56.094 "zcopy": false, 00:10:56.094 "get_zone_info": false, 00:10:56.094 "zone_management": false, 00:10:56.094 "zone_append": false, 00:10:56.094 "compare": false, 00:10:56.094 "compare_and_write": false, 00:10:56.094 "abort": false, 00:10:56.094 "seek_hole": false, 00:10:56.094 "seek_data": false, 00:10:56.094 "copy": false, 00:10:56.094 
"nvme_iov_md": false 00:10:56.094 }, 00:10:56.094 "memory_domains": [ 00:10:56.094 { 00:10:56.094 "dma_device_id": "system", 00:10:56.094 "dma_device_type": 1 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.094 "dma_device_type": 2 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "system", 00:10:56.094 "dma_device_type": 1 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.094 "dma_device_type": 2 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "system", 00:10:56.094 "dma_device_type": 1 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.094 "dma_device_type": 2 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "system", 00:10:56.094 "dma_device_type": 1 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.094 "dma_device_type": 2 00:10:56.094 } 00:10:56.094 ], 00:10:56.094 "driver_specific": { 00:10:56.094 "raid": { 00:10:56.094 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:56.094 "strip_size_kb": 64, 00:10:56.094 "state": "online", 00:10:56.094 "raid_level": "raid0", 00:10:56.094 "superblock": true, 00:10:56.094 "num_base_bdevs": 4, 00:10:56.094 "num_base_bdevs_discovered": 4, 00:10:56.094 "num_base_bdevs_operational": 4, 00:10:56.094 "base_bdevs_list": [ 00:10:56.094 { 00:10:56.094 "name": "BaseBdev1", 00:10:56.094 "uuid": "3e78689e-0143-4fff-b3e3-8c1e7169f7c5", 00:10:56.094 "is_configured": true, 00:10:56.094 "data_offset": 2048, 00:10:56.094 "data_size": 63488 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "name": "BaseBdev2", 00:10:56.094 "uuid": "4a375984-a245-491a-a386-e0d7a015716c", 00:10:56.094 "is_configured": true, 00:10:56.094 "data_offset": 2048, 00:10:56.094 "data_size": 63488 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "name": "BaseBdev3", 00:10:56.094 "uuid": "39939334-bf1f-4edd-8045-90f2bf5da628", 00:10:56.094 "is_configured": true, 
00:10:56.094 "data_offset": 2048, 00:10:56.094 "data_size": 63488 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "name": "BaseBdev4", 00:10:56.094 "uuid": "ccd146eb-348e-4f87-9cee-819f3c74017f", 00:10:56.094 "is_configured": true, 00:10:56.094 "data_offset": 2048, 00:10:56.094 "data_size": 63488 00:10:56.094 } 00:10:56.094 ] 00:10:56.094 } 00:10:56.094 } 00:10:56.094 }' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.094 BaseBdev2 00:10:56.094 BaseBdev3 00:10:56.094 BaseBdev4' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.094 10:05:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.354 [2024-11-19 10:05:10.475254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.354 [2024-11-19 10:05:10.475298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.354 [2024-11-19 10:05:10.475375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.354 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.613 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:56.613 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.613 "name": "Existed_Raid", 00:10:56.613 "uuid": "d0ab32e0-2e67-496a-9a40-f8404a2f6d08", 00:10:56.614 "strip_size_kb": 64, 00:10:56.614 "state": "offline", 00:10:56.614 "raid_level": "raid0", 00:10:56.614 "superblock": true, 00:10:56.614 "num_base_bdevs": 4, 00:10:56.614 "num_base_bdevs_discovered": 3, 00:10:56.614 "num_base_bdevs_operational": 3, 00:10:56.614 "base_bdevs_list": [ 00:10:56.614 { 00:10:56.614 "name": null, 00:10:56.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.614 "is_configured": false, 00:10:56.614 "data_offset": 0, 00:10:56.614 "data_size": 63488 00:10:56.614 }, 00:10:56.614 { 00:10:56.614 "name": "BaseBdev2", 00:10:56.614 "uuid": "4a375984-a245-491a-a386-e0d7a015716c", 00:10:56.614 "is_configured": true, 00:10:56.614 "data_offset": 2048, 00:10:56.614 "data_size": 63488 00:10:56.614 }, 00:10:56.614 { 00:10:56.614 "name": "BaseBdev3", 00:10:56.614 "uuid": "39939334-bf1f-4edd-8045-90f2bf5da628", 00:10:56.614 "is_configured": true, 00:10:56.614 "data_offset": 2048, 00:10:56.614 "data_size": 63488 00:10:56.614 }, 00:10:56.614 { 00:10:56.614 "name": "BaseBdev4", 00:10:56.614 "uuid": "ccd146eb-348e-4f87-9cee-819f3c74017f", 00:10:56.614 "is_configured": true, 00:10:56.614 "data_offset": 2048, 00:10:56.614 "data_size": 63488 00:10:56.614 } 00:10:56.614 ] 00:10:56.614 }' 00:10:56.614 10:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.614 10:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.872 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.872 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.872 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.872 
10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.872 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.872 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.131 [2024-11-19 10:05:11.148597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.131 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.131 [2024-11-19 10:05:11.302178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.390 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.390 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.390 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.390 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.390 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.391 10:05:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 [2024-11-19 10:05:11.451828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.391 [2024-11-19 10:05:11.452050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 BaseBdev2 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 [ 00:10:57.650 { 00:10:57.650 "name": "BaseBdev2", 00:10:57.650 "aliases": [ 00:10:57.650 
"cd7eb321-3ff1-4e58-9546-510f8ccf3b45" 00:10:57.650 ], 00:10:57.650 "product_name": "Malloc disk", 00:10:57.650 "block_size": 512, 00:10:57.650 "num_blocks": 65536, 00:10:57.650 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:10:57.650 "assigned_rate_limits": { 00:10:57.650 "rw_ios_per_sec": 0, 00:10:57.650 "rw_mbytes_per_sec": 0, 00:10:57.650 "r_mbytes_per_sec": 0, 00:10:57.650 "w_mbytes_per_sec": 0 00:10:57.650 }, 00:10:57.650 "claimed": false, 00:10:57.650 "zoned": false, 00:10:57.650 "supported_io_types": { 00:10:57.650 "read": true, 00:10:57.650 "write": true, 00:10:57.650 "unmap": true, 00:10:57.650 "flush": true, 00:10:57.650 "reset": true, 00:10:57.650 "nvme_admin": false, 00:10:57.650 "nvme_io": false, 00:10:57.650 "nvme_io_md": false, 00:10:57.650 "write_zeroes": true, 00:10:57.650 "zcopy": true, 00:10:57.650 "get_zone_info": false, 00:10:57.650 "zone_management": false, 00:10:57.650 "zone_append": false, 00:10:57.650 "compare": false, 00:10:57.650 "compare_and_write": false, 00:10:57.650 "abort": true, 00:10:57.650 "seek_hole": false, 00:10:57.650 "seek_data": false, 00:10:57.650 "copy": true, 00:10:57.650 "nvme_iov_md": false 00:10:57.650 }, 00:10:57.650 "memory_domains": [ 00:10:57.650 { 00:10:57.650 "dma_device_id": "system", 00:10:57.650 "dma_device_type": 1 00:10:57.650 }, 00:10:57.650 { 00:10:57.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.650 "dma_device_type": 2 00:10:57.650 } 00:10:57.650 ], 00:10:57.650 "driver_specific": {} 00:10:57.650 } 00:10:57.650 ] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.650 10:05:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 BaseBdev3 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.650 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 [ 00:10:57.651 { 
00:10:57.651 "name": "BaseBdev3", 00:10:57.651 "aliases": [ 00:10:57.651 "63af2351-34d3-4d99-81bc-e42af1d228c8" 00:10:57.651 ], 00:10:57.651 "product_name": "Malloc disk", 00:10:57.651 "block_size": 512, 00:10:57.651 "num_blocks": 65536, 00:10:57.651 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:10:57.651 "assigned_rate_limits": { 00:10:57.651 "rw_ios_per_sec": 0, 00:10:57.651 "rw_mbytes_per_sec": 0, 00:10:57.651 "r_mbytes_per_sec": 0, 00:10:57.651 "w_mbytes_per_sec": 0 00:10:57.651 }, 00:10:57.651 "claimed": false, 00:10:57.651 "zoned": false, 00:10:57.651 "supported_io_types": { 00:10:57.651 "read": true, 00:10:57.651 "write": true, 00:10:57.651 "unmap": true, 00:10:57.651 "flush": true, 00:10:57.651 "reset": true, 00:10:57.651 "nvme_admin": false, 00:10:57.651 "nvme_io": false, 00:10:57.651 "nvme_io_md": false, 00:10:57.651 "write_zeroes": true, 00:10:57.651 "zcopy": true, 00:10:57.651 "get_zone_info": false, 00:10:57.651 "zone_management": false, 00:10:57.651 "zone_append": false, 00:10:57.651 "compare": false, 00:10:57.651 "compare_and_write": false, 00:10:57.651 "abort": true, 00:10:57.651 "seek_hole": false, 00:10:57.651 "seek_data": false, 00:10:57.651 "copy": true, 00:10:57.651 "nvme_iov_md": false 00:10:57.651 }, 00:10:57.651 "memory_domains": [ 00:10:57.651 { 00:10:57.651 "dma_device_id": "system", 00:10:57.651 "dma_device_type": 1 00:10:57.651 }, 00:10:57.651 { 00:10:57.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.651 "dma_device_type": 2 00:10:57.651 } 00:10:57.651 ], 00:10:57.651 "driver_specific": {} 00:10:57.651 } 00:10:57.651 ] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 BaseBdev4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:57.651 [ 00:10:57.651 { 00:10:57.651 "name": "BaseBdev4", 00:10:57.651 "aliases": [ 00:10:57.651 "1a86813a-cea4-4515-8f2d-a743e8c8590e" 00:10:57.651 ], 00:10:57.651 "product_name": "Malloc disk", 00:10:57.651 "block_size": 512, 00:10:57.651 "num_blocks": 65536, 00:10:57.651 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:10:57.651 "assigned_rate_limits": { 00:10:57.651 "rw_ios_per_sec": 0, 00:10:57.651 "rw_mbytes_per_sec": 0, 00:10:57.651 "r_mbytes_per_sec": 0, 00:10:57.651 "w_mbytes_per_sec": 0 00:10:57.651 }, 00:10:57.651 "claimed": false, 00:10:57.651 "zoned": false, 00:10:57.651 "supported_io_types": { 00:10:57.651 "read": true, 00:10:57.651 "write": true, 00:10:57.651 "unmap": true, 00:10:57.651 "flush": true, 00:10:57.651 "reset": true, 00:10:57.651 "nvme_admin": false, 00:10:57.651 "nvme_io": false, 00:10:57.651 "nvme_io_md": false, 00:10:57.651 "write_zeroes": true, 00:10:57.651 "zcopy": true, 00:10:57.651 "get_zone_info": false, 00:10:57.651 "zone_management": false, 00:10:57.651 "zone_append": false, 00:10:57.651 "compare": false, 00:10:57.651 "compare_and_write": false, 00:10:57.651 "abort": true, 00:10:57.651 "seek_hole": false, 00:10:57.651 "seek_data": false, 00:10:57.651 "copy": true, 00:10:57.651 "nvme_iov_md": false 00:10:57.651 }, 00:10:57.651 "memory_domains": [ 00:10:57.651 { 00:10:57.651 "dma_device_id": "system", 00:10:57.651 "dma_device_type": 1 00:10:57.651 }, 00:10:57.651 { 00:10:57.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.651 "dma_device_type": 2 00:10:57.651 } 00:10:57.651 ], 00:10:57.651 "driver_specific": {} 00:10:57.651 } 00:10:57.651 ] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.651 10:05:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 [2024-11-19 10:05:11.839298] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.651 [2024-11-19 10:05:11.839494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.651 [2024-11-19 10:05:11.839640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.651 [2024-11-19 10:05:11.842459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.651 [2024-11-19 10:05:11.842656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.910 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.910 "name": "Existed_Raid", 00:10:57.910 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:10:57.910 "strip_size_kb": 64, 00:10:57.910 "state": "configuring", 00:10:57.910 "raid_level": "raid0", 00:10:57.910 "superblock": true, 00:10:57.910 "num_base_bdevs": 4, 00:10:57.910 "num_base_bdevs_discovered": 3, 00:10:57.910 "num_base_bdevs_operational": 4, 00:10:57.910 "base_bdevs_list": [ 00:10:57.910 { 00:10:57.910 "name": "BaseBdev1", 00:10:57.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.910 "is_configured": false, 00:10:57.910 "data_offset": 0, 00:10:57.910 "data_size": 0 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev2", 00:10:57.910 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 
00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev3", 00:10:57.910 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev4", 00:10:57.910 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 }' 00:10:57.910 10:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.910 10:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.168 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:58.168 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.168 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.168 [2024-11-19 10:05:12.355409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.169 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.427 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.427 "name": "Existed_Raid", 00:10:58.427 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:10:58.427 "strip_size_kb": 64, 00:10:58.427 "state": "configuring", 00:10:58.427 "raid_level": "raid0", 00:10:58.427 "superblock": true, 00:10:58.427 "num_base_bdevs": 4, 00:10:58.427 "num_base_bdevs_discovered": 2, 00:10:58.427 "num_base_bdevs_operational": 4, 00:10:58.427 "base_bdevs_list": [ 00:10:58.427 { 00:10:58.427 "name": "BaseBdev1", 00:10:58.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.427 "is_configured": false, 00:10:58.427 "data_offset": 0, 00:10:58.427 "data_size": 0 00:10:58.427 }, 00:10:58.427 { 00:10:58.427 "name": null, 00:10:58.428 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:10:58.428 "is_configured": false, 00:10:58.428 "data_offset": 0, 00:10:58.428 "data_size": 63488 
00:10:58.428 }, 00:10:58.428 { 00:10:58.428 "name": "BaseBdev3", 00:10:58.428 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:10:58.428 "is_configured": true, 00:10:58.428 "data_offset": 2048, 00:10:58.428 "data_size": 63488 00:10:58.428 }, 00:10:58.428 { 00:10:58.428 "name": "BaseBdev4", 00:10:58.428 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:10:58.428 "is_configured": true, 00:10:58.428 "data_offset": 2048, 00:10:58.428 "data_size": 63488 00:10:58.428 } 00:10:58.428 ] 00:10:58.428 }' 00:10:58.428 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.428 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.687 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.687 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.687 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.687 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.687 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.946 [2024-11-19 10:05:12.969099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.946 BaseBdev1 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.946 [ 00:10:58.946 { 00:10:58.946 "name": "BaseBdev1", 00:10:58.946 "aliases": [ 00:10:58.946 "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778" 00:10:58.946 ], 00:10:58.946 "product_name": "Malloc disk", 00:10:58.946 "block_size": 512, 00:10:58.946 "num_blocks": 65536, 00:10:58.946 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:10:58.946 "assigned_rate_limits": { 00:10:58.946 "rw_ios_per_sec": 0, 00:10:58.946 "rw_mbytes_per_sec": 0, 
00:10:58.946 "r_mbytes_per_sec": 0, 00:10:58.946 "w_mbytes_per_sec": 0 00:10:58.946 }, 00:10:58.946 "claimed": true, 00:10:58.946 "claim_type": "exclusive_write", 00:10:58.946 "zoned": false, 00:10:58.946 "supported_io_types": { 00:10:58.946 "read": true, 00:10:58.946 "write": true, 00:10:58.946 "unmap": true, 00:10:58.946 "flush": true, 00:10:58.946 "reset": true, 00:10:58.946 "nvme_admin": false, 00:10:58.946 "nvme_io": false, 00:10:58.946 "nvme_io_md": false, 00:10:58.946 "write_zeroes": true, 00:10:58.946 "zcopy": true, 00:10:58.946 "get_zone_info": false, 00:10:58.946 "zone_management": false, 00:10:58.946 "zone_append": false, 00:10:58.946 "compare": false, 00:10:58.946 "compare_and_write": false, 00:10:58.946 "abort": true, 00:10:58.946 "seek_hole": false, 00:10:58.946 "seek_data": false, 00:10:58.946 "copy": true, 00:10:58.946 "nvme_iov_md": false 00:10:58.946 }, 00:10:58.946 "memory_domains": [ 00:10:58.946 { 00:10:58.946 "dma_device_id": "system", 00:10:58.946 "dma_device_type": 1 00:10:58.946 }, 00:10:58.946 { 00:10:58.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.946 "dma_device_type": 2 00:10:58.946 } 00:10:58.946 ], 00:10:58.946 "driver_specific": {} 00:10:58.946 } 00:10:58.946 ] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.946 10:05:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.946 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.947 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.947 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.947 10:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.947 "name": "Existed_Raid", 00:10:58.947 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:10:58.947 "strip_size_kb": 64, 00:10:58.947 "state": "configuring", 00:10:58.947 "raid_level": "raid0", 00:10:58.947 "superblock": true, 00:10:58.947 "num_base_bdevs": 4, 00:10:58.947 "num_base_bdevs_discovered": 3, 00:10:58.947 "num_base_bdevs_operational": 4, 00:10:58.947 "base_bdevs_list": [ 00:10:58.947 { 00:10:58.947 "name": "BaseBdev1", 00:10:58.947 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:10:58.947 "is_configured": true, 00:10:58.947 "data_offset": 2048, 00:10:58.947 "data_size": 63488 00:10:58.947 }, 00:10:58.947 { 
00:10:58.947 "name": null, 00:10:58.947 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:10:58.947 "is_configured": false, 00:10:58.947 "data_offset": 0, 00:10:58.947 "data_size": 63488 00:10:58.947 }, 00:10:58.947 { 00:10:58.947 "name": "BaseBdev3", 00:10:58.947 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:10:58.947 "is_configured": true, 00:10:58.947 "data_offset": 2048, 00:10:58.947 "data_size": 63488 00:10:58.947 }, 00:10:58.947 { 00:10:58.947 "name": "BaseBdev4", 00:10:58.947 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:10:58.947 "is_configured": true, 00:10:58.947 "data_offset": 2048, 00:10:58.947 "data_size": 63488 00:10:58.947 } 00:10:58.947 ] 00:10:58.947 }' 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.947 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.513 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.514 [2024-11-19 10:05:13.533392] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.514 10:05:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.514 "name": "Existed_Raid", 00:10:59.514 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:10:59.514 "strip_size_kb": 64, 00:10:59.514 "state": "configuring", 00:10:59.514 "raid_level": "raid0", 00:10:59.514 "superblock": true, 00:10:59.514 "num_base_bdevs": 4, 00:10:59.514 "num_base_bdevs_discovered": 2, 00:10:59.514 "num_base_bdevs_operational": 4, 00:10:59.514 "base_bdevs_list": [ 00:10:59.514 { 00:10:59.514 "name": "BaseBdev1", 00:10:59.514 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:10:59.514 "is_configured": true, 00:10:59.514 "data_offset": 2048, 00:10:59.514 "data_size": 63488 00:10:59.514 }, 00:10:59.514 { 00:10:59.514 "name": null, 00:10:59.514 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:10:59.514 "is_configured": false, 00:10:59.514 "data_offset": 0, 00:10:59.514 "data_size": 63488 00:10:59.514 }, 00:10:59.514 { 00:10:59.514 "name": null, 00:10:59.514 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:10:59.514 "is_configured": false, 00:10:59.514 "data_offset": 0, 00:10:59.514 "data_size": 63488 00:10:59.514 }, 00:10:59.514 { 00:10:59.514 "name": "BaseBdev4", 00:10:59.514 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:10:59.514 "is_configured": true, 00:10:59.514 "data_offset": 2048, 00:10:59.514 "data_size": 63488 00:10:59.514 } 00:10:59.514 ] 00:10:59.514 }' 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.514 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.772 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.772 10:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.772 10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.772 
10:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.031 [2024-11-19 10:05:14.049505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.031 "name": "Existed_Raid", 00:11:00.031 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:00.031 "strip_size_kb": 64, 00:11:00.031 "state": "configuring", 00:11:00.031 "raid_level": "raid0", 00:11:00.031 "superblock": true, 00:11:00.031 "num_base_bdevs": 4, 00:11:00.031 "num_base_bdevs_discovered": 3, 00:11:00.031 "num_base_bdevs_operational": 4, 00:11:00.031 "base_bdevs_list": [ 00:11:00.031 { 00:11:00.031 "name": "BaseBdev1", 00:11:00.031 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:00.031 "is_configured": true, 00:11:00.031 "data_offset": 2048, 00:11:00.031 "data_size": 63488 00:11:00.031 }, 00:11:00.031 { 00:11:00.031 "name": null, 00:11:00.031 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:11:00.031 "is_configured": false, 00:11:00.031 "data_offset": 0, 00:11:00.031 "data_size": 63488 00:11:00.031 }, 00:11:00.031 { 00:11:00.031 "name": "BaseBdev3", 00:11:00.031 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:11:00.031 "is_configured": true, 00:11:00.031 "data_offset": 2048, 00:11:00.031 "data_size": 63488 00:11:00.031 }, 00:11:00.031 { 00:11:00.031 "name": "BaseBdev4", 00:11:00.031 "uuid": 
"1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:11:00.031 "is_configured": true, 00:11:00.031 "data_offset": 2048, 00:11:00.031 "data_size": 63488 00:11:00.031 } 00:11:00.031 ] 00:11:00.031 }' 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.031 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.598 [2024-11-19 10:05:14.609685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.598 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.599 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.599 "name": "Existed_Raid", 00:11:00.599 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:00.599 "strip_size_kb": 64, 00:11:00.599 "state": "configuring", 00:11:00.599 "raid_level": "raid0", 00:11:00.599 "superblock": true, 00:11:00.599 "num_base_bdevs": 4, 00:11:00.599 "num_base_bdevs_discovered": 2, 00:11:00.599 "num_base_bdevs_operational": 4, 00:11:00.599 "base_bdevs_list": [ 00:11:00.599 { 00:11:00.599 "name": null, 00:11:00.599 
"uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:00.599 "is_configured": false, 00:11:00.599 "data_offset": 0, 00:11:00.599 "data_size": 63488 00:11:00.599 }, 00:11:00.599 { 00:11:00.599 "name": null, 00:11:00.599 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:11:00.599 "is_configured": false, 00:11:00.599 "data_offset": 0, 00:11:00.599 "data_size": 63488 00:11:00.599 }, 00:11:00.599 { 00:11:00.599 "name": "BaseBdev3", 00:11:00.599 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:11:00.599 "is_configured": true, 00:11:00.599 "data_offset": 2048, 00:11:00.599 "data_size": 63488 00:11:00.599 }, 00:11:00.599 { 00:11:00.599 "name": "BaseBdev4", 00:11:00.599 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:11:00.599 "is_configured": true, 00:11:00.599 "data_offset": 2048, 00:11:00.599 "data_size": 63488 00:11:00.599 } 00:11:00.599 ] 00:11:00.599 }' 00:11:00.599 10:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.599 10:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.165 [2024-11-19 10:05:15.254475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.165 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.166 10:05:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.166 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.166 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.166 "name": "Existed_Raid", 00:11:01.166 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:01.166 "strip_size_kb": 64, 00:11:01.166 "state": "configuring", 00:11:01.166 "raid_level": "raid0", 00:11:01.166 "superblock": true, 00:11:01.166 "num_base_bdevs": 4, 00:11:01.166 "num_base_bdevs_discovered": 3, 00:11:01.166 "num_base_bdevs_operational": 4, 00:11:01.166 "base_bdevs_list": [ 00:11:01.166 { 00:11:01.166 "name": null, 00:11:01.166 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:01.166 "is_configured": false, 00:11:01.166 "data_offset": 0, 00:11:01.166 "data_size": 63488 00:11:01.166 }, 00:11:01.166 { 00:11:01.166 "name": "BaseBdev2", 00:11:01.166 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:11:01.166 "is_configured": true, 00:11:01.166 "data_offset": 2048, 00:11:01.166 "data_size": 63488 00:11:01.166 }, 00:11:01.166 { 00:11:01.166 "name": "BaseBdev3", 00:11:01.166 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:11:01.166 "is_configured": true, 00:11:01.166 "data_offset": 2048, 00:11:01.166 "data_size": 63488 00:11:01.166 }, 00:11:01.166 { 00:11:01.166 "name": "BaseBdev4", 00:11:01.166 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:11:01.166 "is_configured": true, 00:11:01.166 "data_offset": 2048, 00:11:01.166 "data_size": 63488 00:11:01.166 } 00:11:01.166 ] 00:11:01.166 }' 00:11:01.166 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.166 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.733 10:05:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce2c2604-5b6d-41d0-9f64-5ceaf00e9778 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 [2024-11-19 10:05:15.927776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.733 [2024-11-19 10:05:15.928185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.733 [2024-11-19 10:05:15.928204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:01.733 [2024-11-19 10:05:15.928543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:01.733 NewBaseBdev 00:11:01.733 [2024-11-19 10:05:15.928731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.733 [2024-11-19 10:05:15.928752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:01.733 [2024-11-19 10:05:15.928943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.733 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.733 10:05:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.733 [ 00:11:01.733 { 00:11:01.733 "name": "NewBaseBdev", 00:11:01.733 "aliases": [ 00:11:01.734 "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778" 00:11:01.734 ], 00:11:01.734 "product_name": "Malloc disk", 00:11:01.734 "block_size": 512, 00:11:01.734 "num_blocks": 65536, 00:11:01.734 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:01.734 "assigned_rate_limits": { 00:11:01.734 "rw_ios_per_sec": 0, 00:11:01.734 "rw_mbytes_per_sec": 0, 00:11:01.734 "r_mbytes_per_sec": 0, 00:11:01.734 "w_mbytes_per_sec": 0 00:11:01.734 }, 00:11:01.734 "claimed": true, 00:11:01.734 "claim_type": "exclusive_write", 00:11:01.734 "zoned": false, 00:11:01.734 "supported_io_types": { 00:11:01.734 "read": true, 00:11:01.734 "write": true, 00:11:01.734 "unmap": true, 00:11:01.734 "flush": true, 00:11:01.734 "reset": true, 00:11:01.734 "nvme_admin": false, 00:11:01.734 "nvme_io": false, 00:11:01.734 "nvme_io_md": false, 00:11:01.734 "write_zeroes": true, 00:11:01.734 "zcopy": true, 00:11:01.734 "get_zone_info": false, 00:11:01.734 "zone_management": false, 00:11:01.734 "zone_append": false, 00:11:01.734 "compare": false, 00:11:01.734 "compare_and_write": false, 00:11:01.734 "abort": true, 00:11:01.734 "seek_hole": false, 00:11:01.734 "seek_data": false, 00:11:01.734 "copy": true, 00:11:01.734 "nvme_iov_md": false 00:11:01.734 }, 00:11:01.734 "memory_domains": [ 00:11:01.734 { 00:11:01.734 "dma_device_id": "system", 00:11:01.734 "dma_device_type": 1 00:11:01.734 }, 00:11:01.734 { 00:11:01.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.734 "dma_device_type": 2 00:11:01.734 } 00:11:01.734 ], 00:11:01.734 "driver_specific": {} 00:11:01.734 } 00:11:01.734 ] 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.734 10:05:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.734 10:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.993 10:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.993 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.993 "name": "Existed_Raid", 00:11:01.993 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:01.993 "strip_size_kb": 64, 00:11:01.993 
"state": "online", 00:11:01.993 "raid_level": "raid0", 00:11:01.993 "superblock": true, 00:11:01.993 "num_base_bdevs": 4, 00:11:01.993 "num_base_bdevs_discovered": 4, 00:11:01.993 "num_base_bdevs_operational": 4, 00:11:01.993 "base_bdevs_list": [ 00:11:01.993 { 00:11:01.993 "name": "NewBaseBdev", 00:11:01.993 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:01.993 "is_configured": true, 00:11:01.993 "data_offset": 2048, 00:11:01.993 "data_size": 63488 00:11:01.993 }, 00:11:01.993 { 00:11:01.993 "name": "BaseBdev2", 00:11:01.993 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:11:01.993 "is_configured": true, 00:11:01.993 "data_offset": 2048, 00:11:01.993 "data_size": 63488 00:11:01.993 }, 00:11:01.993 { 00:11:01.993 "name": "BaseBdev3", 00:11:01.993 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:11:01.993 "is_configured": true, 00:11:01.993 "data_offset": 2048, 00:11:01.993 "data_size": 63488 00:11:01.993 }, 00:11:01.993 { 00:11:01.993 "name": "BaseBdev4", 00:11:01.993 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:11:01.993 "is_configured": true, 00:11:01.993 "data_offset": 2048, 00:11:01.993 "data_size": 63488 00:11:01.993 } 00:11:01.993 ] 00:11:01.993 }' 00:11:01.993 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.993 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.560 
10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 [2024-11-19 10:05:16.508517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.560 "name": "Existed_Raid", 00:11:02.560 "aliases": [ 00:11:02.560 "d87fe3fb-deeb-401b-b8a7-863785d657fd" 00:11:02.560 ], 00:11:02.560 "product_name": "Raid Volume", 00:11:02.560 "block_size": 512, 00:11:02.560 "num_blocks": 253952, 00:11:02.560 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:02.560 "assigned_rate_limits": { 00:11:02.560 "rw_ios_per_sec": 0, 00:11:02.560 "rw_mbytes_per_sec": 0, 00:11:02.560 "r_mbytes_per_sec": 0, 00:11:02.560 "w_mbytes_per_sec": 0 00:11:02.560 }, 00:11:02.560 "claimed": false, 00:11:02.560 "zoned": false, 00:11:02.560 "supported_io_types": { 00:11:02.560 "read": true, 00:11:02.560 "write": true, 00:11:02.560 "unmap": true, 00:11:02.560 "flush": true, 00:11:02.560 "reset": true, 00:11:02.560 "nvme_admin": false, 00:11:02.560 "nvme_io": false, 00:11:02.560 "nvme_io_md": false, 00:11:02.560 "write_zeroes": true, 00:11:02.560 "zcopy": false, 00:11:02.560 "get_zone_info": false, 00:11:02.560 "zone_management": false, 00:11:02.560 "zone_append": false, 00:11:02.560 "compare": false, 00:11:02.560 "compare_and_write": false, 00:11:02.560 "abort": 
false, 00:11:02.560 "seek_hole": false, 00:11:02.560 "seek_data": false, 00:11:02.560 "copy": false, 00:11:02.560 "nvme_iov_md": false 00:11:02.560 }, 00:11:02.560 "memory_domains": [ 00:11:02.560 { 00:11:02.560 "dma_device_id": "system", 00:11:02.560 "dma_device_type": 1 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.560 "dma_device_type": 2 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "system", 00:11:02.560 "dma_device_type": 1 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.560 "dma_device_type": 2 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "system", 00:11:02.560 "dma_device_type": 1 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.560 "dma_device_type": 2 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "system", 00:11:02.560 "dma_device_type": 1 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.560 "dma_device_type": 2 00:11:02.560 } 00:11:02.560 ], 00:11:02.560 "driver_specific": { 00:11:02.560 "raid": { 00:11:02.560 "uuid": "d87fe3fb-deeb-401b-b8a7-863785d657fd", 00:11:02.560 "strip_size_kb": 64, 00:11:02.560 "state": "online", 00:11:02.560 "raid_level": "raid0", 00:11:02.560 "superblock": true, 00:11:02.560 "num_base_bdevs": 4, 00:11:02.560 "num_base_bdevs_discovered": 4, 00:11:02.560 "num_base_bdevs_operational": 4, 00:11:02.560 "base_bdevs_list": [ 00:11:02.560 { 00:11:02.560 "name": "NewBaseBdev", 00:11:02.560 "uuid": "ce2c2604-5b6d-41d0-9f64-5ceaf00e9778", 00:11:02.560 "is_configured": true, 00:11:02.560 "data_offset": 2048, 00:11:02.560 "data_size": 63488 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "name": "BaseBdev2", 00:11:02.560 "uuid": "cd7eb321-3ff1-4e58-9546-510f8ccf3b45", 00:11:02.560 "is_configured": true, 00:11:02.560 "data_offset": 2048, 00:11:02.560 "data_size": 63488 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 
"name": "BaseBdev3", 00:11:02.560 "uuid": "63af2351-34d3-4d99-81bc-e42af1d228c8", 00:11:02.560 "is_configured": true, 00:11:02.560 "data_offset": 2048, 00:11:02.560 "data_size": 63488 00:11:02.560 }, 00:11:02.560 { 00:11:02.560 "name": "BaseBdev4", 00:11:02.560 "uuid": "1a86813a-cea4-4515-8f2d-a743e8c8590e", 00:11:02.560 "is_configured": true, 00:11:02.560 "data_offset": 2048, 00:11:02.560 "data_size": 63488 00:11:02.560 } 00:11:02.560 ] 00:11:02.560 } 00:11:02.560 } 00:11:02.560 }' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.560 BaseBdev2 00:11:02.560 BaseBdev3 00:11:02.560 BaseBdev4' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.560 10:05:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.853 [2024-11-19 10:05:16.932188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.853 [2024-11-19 10:05:16.932249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.853 [2024-11-19 10:05:16.932382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.853 [2024-11-19 10:05:16.932492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.853 [2024-11-19 10:05:16.932510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70022 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70022 ']' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70022 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70022 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.853 killing process with pid 70022 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70022' 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70022 00:11:02.853 [2024-11-19 10:05:16.976260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.853 10:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70022 00:11:03.145 [2024-11-19 10:05:17.363078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.521 10:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.521 00:11:04.521 real 0m12.937s 00:11:04.521 user 0m21.179s 00:11:04.521 sys 0m1.850s 00:11:04.521 10:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.521 10:05:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.521 ************************************ 00:11:04.521 END TEST raid_state_function_test_sb 00:11:04.521 ************************************ 00:11:04.521 10:05:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:04.521 10:05:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:04.521 10:05:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.521 10:05:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.521 ************************************ 00:11:04.521 START TEST raid_superblock_test 00:11:04.521 ************************************ 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70699 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70699 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70699 ']' 00:11:04.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.521 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.522 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.522 10:05:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.522 [2024-11-19 10:05:18.662081] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:04.522 [2024-11-19 10:05:18.662339] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:11:04.780 [2024-11-19 10:05:18.858212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.037 [2024-11-19 10:05:19.042351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.037 [2024-11-19 10:05:19.268690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.037 [2024-11-19 10:05:19.268767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:05.604 
10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.604 malloc1 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.604 [2024-11-19 10:05:19.727200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.604 [2024-11-19 10:05:19.727302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.604 [2024-11-19 10:05:19.727338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:05.604 [2024-11-19 10:05:19.727354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.604 [2024-11-19 10:05:19.730504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.604 [2024-11-19 10:05:19.730696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.604 pt1 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.604 malloc2 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.604 [2024-11-19 10:05:19.783046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.604 [2024-11-19 10:05:19.783127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.604 [2024-11-19 10:05:19.783161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:05.604 [2024-11-19 10:05:19.783182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.604 [2024-11-19 10:05:19.786200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.604 [2024-11-19 10:05:19.786249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.604 
pt2 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.604 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.863 malloc3 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.863 [2024-11-19 10:05:19.848933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.863 [2024-11-19 10:05:19.849016] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.863 [2024-11-19 10:05:19.849051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:05.863 [2024-11-19 10:05:19.849067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.863 [2024-11-19 10:05:19.852254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.863 [2024-11-19 10:05:19.852473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.863 pt3 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.863 malloc4 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.863 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.863 [2024-11-19 10:05:19.908968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.863 [2024-11-19 10:05:19.909057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.863 [2024-11-19 10:05:19.909103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:05.863 [2024-11-19 10:05:19.909119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.863 [2024-11-19 10:05:19.912267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.863 [2024-11-19 10:05:19.912346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.863 pt4 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.864 [2024-11-19 10:05:19.921167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.864 [2024-11-19 
10:05:19.924112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.864 [2024-11-19 10:05:19.924226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.864 [2024-11-19 10:05:19.924331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.864 [2024-11-19 10:05:19.924614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:05.864 [2024-11-19 10:05:19.924633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.864 [2024-11-19 10:05:19.925052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:05.864 [2024-11-19 10:05:19.925300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:05.864 [2024-11-19 10:05:19.925322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:05.864 [2024-11-19 10:05:19.925632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.864 "name": "raid_bdev1", 00:11:05.864 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:05.864 "strip_size_kb": 64, 00:11:05.864 "state": "online", 00:11:05.864 "raid_level": "raid0", 00:11:05.864 "superblock": true, 00:11:05.864 "num_base_bdevs": 4, 00:11:05.864 "num_base_bdevs_discovered": 4, 00:11:05.864 "num_base_bdevs_operational": 4, 00:11:05.864 "base_bdevs_list": [ 00:11:05.864 { 00:11:05.864 "name": "pt1", 00:11:05.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.864 "is_configured": true, 00:11:05.864 "data_offset": 2048, 00:11:05.864 "data_size": 63488 00:11:05.864 }, 00:11:05.864 { 00:11:05.864 "name": "pt2", 00:11:05.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.864 "is_configured": true, 00:11:05.864 "data_offset": 2048, 00:11:05.864 "data_size": 63488 00:11:05.864 }, 00:11:05.864 { 00:11:05.864 "name": "pt3", 00:11:05.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.864 "is_configured": true, 00:11:05.864 "data_offset": 2048, 00:11:05.864 
"data_size": 63488 00:11:05.864 }, 00:11:05.864 { 00:11:05.864 "name": "pt4", 00:11:05.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.864 "is_configured": true, 00:11:05.864 "data_offset": 2048, 00:11:05.864 "data_size": 63488 00:11:05.864 } 00:11:05.864 ] 00:11:05.864 }' 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.864 10:05:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.430 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.431 [2024-11-19 10:05:20.450134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.431 "name": "raid_bdev1", 00:11:06.431 "aliases": [ 00:11:06.431 "208f15da-9269-4746-8bed-6f92d0510527" 
00:11:06.431 ], 00:11:06.431 "product_name": "Raid Volume", 00:11:06.431 "block_size": 512, 00:11:06.431 "num_blocks": 253952, 00:11:06.431 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:06.431 "assigned_rate_limits": { 00:11:06.431 "rw_ios_per_sec": 0, 00:11:06.431 "rw_mbytes_per_sec": 0, 00:11:06.431 "r_mbytes_per_sec": 0, 00:11:06.431 "w_mbytes_per_sec": 0 00:11:06.431 }, 00:11:06.431 "claimed": false, 00:11:06.431 "zoned": false, 00:11:06.431 "supported_io_types": { 00:11:06.431 "read": true, 00:11:06.431 "write": true, 00:11:06.431 "unmap": true, 00:11:06.431 "flush": true, 00:11:06.431 "reset": true, 00:11:06.431 "nvme_admin": false, 00:11:06.431 "nvme_io": false, 00:11:06.431 "nvme_io_md": false, 00:11:06.431 "write_zeroes": true, 00:11:06.431 "zcopy": false, 00:11:06.431 "get_zone_info": false, 00:11:06.431 "zone_management": false, 00:11:06.431 "zone_append": false, 00:11:06.431 "compare": false, 00:11:06.431 "compare_and_write": false, 00:11:06.431 "abort": false, 00:11:06.431 "seek_hole": false, 00:11:06.431 "seek_data": false, 00:11:06.431 "copy": false, 00:11:06.431 "nvme_iov_md": false 00:11:06.431 }, 00:11:06.431 "memory_domains": [ 00:11:06.431 { 00:11:06.431 "dma_device_id": "system", 00:11:06.431 "dma_device_type": 1 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.431 "dma_device_type": 2 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "system", 00:11:06.431 "dma_device_type": 1 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.431 "dma_device_type": 2 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "system", 00:11:06.431 "dma_device_type": 1 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.431 "dma_device_type": 2 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": "system", 00:11:06.431 "dma_device_type": 1 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:06.431 "dma_device_type": 2 00:11:06.431 } 00:11:06.431 ], 00:11:06.431 "driver_specific": { 00:11:06.431 "raid": { 00:11:06.431 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:06.431 "strip_size_kb": 64, 00:11:06.431 "state": "online", 00:11:06.431 "raid_level": "raid0", 00:11:06.431 "superblock": true, 00:11:06.431 "num_base_bdevs": 4, 00:11:06.431 "num_base_bdevs_discovered": 4, 00:11:06.431 "num_base_bdevs_operational": 4, 00:11:06.431 "base_bdevs_list": [ 00:11:06.431 { 00:11:06.431 "name": "pt1", 00:11:06.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.431 "is_configured": true, 00:11:06.431 "data_offset": 2048, 00:11:06.431 "data_size": 63488 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "name": "pt2", 00:11:06.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.431 "is_configured": true, 00:11:06.431 "data_offset": 2048, 00:11:06.431 "data_size": 63488 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "name": "pt3", 00:11:06.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.431 "is_configured": true, 00:11:06.431 "data_offset": 2048, 00:11:06.431 "data_size": 63488 00:11:06.431 }, 00:11:06.431 { 00:11:06.431 "name": "pt4", 00:11:06.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.431 "is_configured": true, 00:11:06.431 "data_offset": 2048, 00:11:06.431 "data_size": 63488 00:11:06.431 } 00:11:06.431 ] 00:11:06.431 } 00:11:06.431 } 00:11:06.431 }' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:06.431 pt2 00:11:06.431 pt3 00:11:06.431 pt4' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.431 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.690 10:05:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.690 [2024-11-19 10:05:20.814209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=208f15da-9269-4746-8bed-6f92d0510527 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 208f15da-9269-4746-8bed-6f92d0510527 ']' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.690 [2024-11-19 10:05:20.881838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.690 [2024-11-19 10:05:20.881884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.690 [2024-11-19 10:05:20.882011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.690 [2024-11-19 10:05:20.882112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.690 [2024-11-19 10:05:20.882136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.690 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.950 10:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.950 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.950 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.951 10:05:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.951 [2024-11-19 10:05:21.029888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:06.951 [2024-11-19 10:05:21.032572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:06.951 [2024-11-19 10:05:21.032645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:06.951 [2024-11-19 10:05:21.032703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:06.951 [2024-11-19 10:05:21.032815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:06.951 [2024-11-19 10:05:21.032903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:06.951 [2024-11-19 10:05:21.032938] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:06.951 [2024-11-19 10:05:21.032969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:06.951 [2024-11-19 10:05:21.032991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.951 [2024-11-19 10:05:21.033011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:06.951 request: 00:11:06.951 { 00:11:06.951 "name": "raid_bdev1", 00:11:06.951 "raid_level": "raid0", 00:11:06.951 "base_bdevs": [ 00:11:06.951 "malloc1", 00:11:06.951 "malloc2", 00:11:06.951 "malloc3", 00:11:06.951 "malloc4" 00:11:06.951 ], 00:11:06.951 "strip_size_kb": 64, 00:11:06.951 "superblock": false, 00:11:06.951 "method": "bdev_raid_create", 00:11:06.951 "req_id": 1 00:11:06.951 } 00:11:06.951 Got JSON-RPC error response 00:11:06.951 response: 00:11:06.951 { 00:11:06.951 "code": -17, 00:11:06.951 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:06.951 } 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.951 [2024-11-19 10:05:21.113943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.951 [2024-11-19 10:05:21.114173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.951 [2024-11-19 10:05:21.114353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:06.951 [2024-11-19 10:05:21.114484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.951 [2024-11-19 10:05:21.117637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.951 [2024-11-19 10:05:21.117822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.951 [2024-11-19 10:05:21.118065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:06.951 [2024-11-19 10:05:21.118274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.951 pt1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.951 "name": "raid_bdev1", 00:11:06.951 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:06.951 "strip_size_kb": 64, 00:11:06.951 "state": "configuring", 00:11:06.951 "raid_level": "raid0", 00:11:06.951 "superblock": true, 00:11:06.951 "num_base_bdevs": 4, 00:11:06.951 "num_base_bdevs_discovered": 1, 00:11:06.951 "num_base_bdevs_operational": 4, 00:11:06.951 "base_bdevs_list": [ 00:11:06.951 { 00:11:06.951 "name": "pt1", 00:11:06.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.951 "is_configured": true, 00:11:06.951 "data_offset": 2048, 00:11:06.951 "data_size": 63488 00:11:06.951 }, 00:11:06.951 { 00:11:06.951 "name": null, 00:11:06.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.951 "is_configured": false, 00:11:06.951 "data_offset": 2048, 00:11:06.951 "data_size": 63488 00:11:06.951 }, 00:11:06.951 { 00:11:06.951 "name": null, 00:11:06.951 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.951 "is_configured": false, 00:11:06.951 "data_offset": 2048, 00:11:06.951 "data_size": 63488 00:11:06.951 }, 00:11:06.951 { 00:11:06.951 "name": null, 00:11:06.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.951 "is_configured": false, 00:11:06.951 "data_offset": 2048, 00:11:06.951 "data_size": 63488 00:11:06.951 } 00:11:06.951 ] 00:11:06.951 }' 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.951 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.519 [2024-11-19 10:05:21.638333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.519 [2024-11-19 10:05:21.638456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.519 [2024-11-19 10:05:21.638488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:07.519 [2024-11-19 10:05:21.638507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.519 [2024-11-19 10:05:21.639162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.519 [2024-11-19 10:05:21.639205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.519 [2024-11-19 10:05:21.639334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.519 [2024-11-19 10:05:21.639375] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.519 pt2 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.519 [2024-11-19 10:05:21.650398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:07.519 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.520 10:05:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.520 "name": "raid_bdev1", 00:11:07.520 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:07.520 "strip_size_kb": 64, 00:11:07.520 "state": "configuring", 00:11:07.520 "raid_level": "raid0", 00:11:07.520 "superblock": true, 00:11:07.520 "num_base_bdevs": 4, 00:11:07.520 "num_base_bdevs_discovered": 1, 00:11:07.520 "num_base_bdevs_operational": 4, 00:11:07.520 "base_bdevs_list": [ 00:11:07.520 { 00:11:07.520 "name": "pt1", 00:11:07.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.520 "is_configured": true, 00:11:07.520 "data_offset": 2048, 00:11:07.520 "data_size": 63488 00:11:07.520 }, 00:11:07.520 { 00:11:07.520 "name": null, 00:11:07.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.520 "is_configured": false, 00:11:07.520 "data_offset": 0, 00:11:07.520 "data_size": 63488 00:11:07.520 }, 00:11:07.520 { 00:11:07.520 "name": null, 00:11:07.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.520 "is_configured": false, 00:11:07.520 "data_offset": 2048, 00:11:07.520 "data_size": 63488 00:11:07.520 }, 00:11:07.520 { 00:11:07.520 "name": null, 00:11:07.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.520 "is_configured": false, 00:11:07.520 "data_offset": 2048, 00:11:07.520 "data_size": 63488 00:11:07.520 } 00:11:07.520 ] 00:11:07.520 }' 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.520 10:05:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.087 [2024-11-19 10:05:22.178552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.087 [2024-11-19 10:05:22.178692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.087 [2024-11-19 10:05:22.178752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:08.087 [2024-11-19 10:05:22.178772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.087 [2024-11-19 10:05:22.179687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.087 [2024-11-19 10:05:22.179714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.087 [2024-11-19 10:05:22.179888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.087 [2024-11-19 10:05:22.179948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.087 pt2 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.087 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 [2024-11-19 10:05:22.186425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.088 [2024-11-19 10:05:22.186492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.088 [2024-11-19 10:05:22.186532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:08.088 [2024-11-19 10:05:22.186548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.088 [2024-11-19 10:05:22.187132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.088 [2024-11-19 10:05:22.187173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.088 [2024-11-19 10:05:22.187282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:08.088 [2024-11-19 10:05:22.187315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.088 pt3 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 [2024-11-19 10:05:22.198438] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:08.088 [2024-11-19 10:05:22.198723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.088 [2024-11-19 10:05:22.198772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:08.088 [2024-11-19 10:05:22.198807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.088 [2024-11-19 10:05:22.199482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.088 [2024-11-19 10:05:22.199518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:08.088 [2024-11-19 10:05:22.199639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:08.088 [2024-11-19 10:05:22.199673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:08.088 [2024-11-19 10:05:22.199924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.088 [2024-11-19 10:05:22.199970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.088 [2024-11-19 10:05:22.200307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:08.088 [2024-11-19 10:05:22.200526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.088 [2024-11-19 10:05:22.200549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:08.088 [2024-11-19 10:05:22.200742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.088 pt4 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.088 "name": "raid_bdev1", 00:11:08.088 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:08.088 "strip_size_kb": 64, 00:11:08.088 "state": "online", 00:11:08.088 "raid_level": "raid0", 00:11:08.088 
"superblock": true, 00:11:08.088 "num_base_bdevs": 4, 00:11:08.088 "num_base_bdevs_discovered": 4, 00:11:08.088 "num_base_bdevs_operational": 4, 00:11:08.088 "base_bdevs_list": [ 00:11:08.088 { 00:11:08.088 "name": "pt1", 00:11:08.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.088 "is_configured": true, 00:11:08.088 "data_offset": 2048, 00:11:08.088 "data_size": 63488 00:11:08.088 }, 00:11:08.088 { 00:11:08.088 "name": "pt2", 00:11:08.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.088 "is_configured": true, 00:11:08.088 "data_offset": 2048, 00:11:08.088 "data_size": 63488 00:11:08.088 }, 00:11:08.088 { 00:11:08.088 "name": "pt3", 00:11:08.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.088 "is_configured": true, 00:11:08.088 "data_offset": 2048, 00:11:08.088 "data_size": 63488 00:11:08.088 }, 00:11:08.088 { 00:11:08.088 "name": "pt4", 00:11:08.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.088 "is_configured": true, 00:11:08.088 "data_offset": 2048, 00:11:08.088 "data_size": 63488 00:11:08.088 } 00:11:08.088 ] 00:11:08.088 }' 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.088 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.655 10:05:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.655 [2024-11-19 10:05:22.767137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.655 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.655 "name": "raid_bdev1", 00:11:08.655 "aliases": [ 00:11:08.655 "208f15da-9269-4746-8bed-6f92d0510527" 00:11:08.655 ], 00:11:08.655 "product_name": "Raid Volume", 00:11:08.655 "block_size": 512, 00:11:08.655 "num_blocks": 253952, 00:11:08.655 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:08.655 "assigned_rate_limits": { 00:11:08.655 "rw_ios_per_sec": 0, 00:11:08.655 "rw_mbytes_per_sec": 0, 00:11:08.655 "r_mbytes_per_sec": 0, 00:11:08.655 "w_mbytes_per_sec": 0 00:11:08.655 }, 00:11:08.655 "claimed": false, 00:11:08.655 "zoned": false, 00:11:08.655 "supported_io_types": { 00:11:08.655 "read": true, 00:11:08.655 "write": true, 00:11:08.655 "unmap": true, 00:11:08.655 "flush": true, 00:11:08.655 "reset": true, 00:11:08.655 "nvme_admin": false, 00:11:08.655 "nvme_io": false, 00:11:08.655 "nvme_io_md": false, 00:11:08.655 "write_zeroes": true, 00:11:08.655 "zcopy": false, 00:11:08.655 "get_zone_info": false, 00:11:08.655 "zone_management": false, 00:11:08.655 "zone_append": false, 00:11:08.655 "compare": false, 00:11:08.655 "compare_and_write": false, 00:11:08.655 "abort": false, 00:11:08.655 "seek_hole": false, 00:11:08.656 "seek_data": false, 00:11:08.656 "copy": false, 00:11:08.656 "nvme_iov_md": false 00:11:08.656 }, 00:11:08.656 
"memory_domains": [ 00:11:08.656 { 00:11:08.656 "dma_device_id": "system", 00:11:08.656 "dma_device_type": 1 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.656 "dma_device_type": 2 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "system", 00:11:08.656 "dma_device_type": 1 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.656 "dma_device_type": 2 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "system", 00:11:08.656 "dma_device_type": 1 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.656 "dma_device_type": 2 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "system", 00:11:08.656 "dma_device_type": 1 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.656 "dma_device_type": 2 00:11:08.656 } 00:11:08.656 ], 00:11:08.656 "driver_specific": { 00:11:08.656 "raid": { 00:11:08.656 "uuid": "208f15da-9269-4746-8bed-6f92d0510527", 00:11:08.656 "strip_size_kb": 64, 00:11:08.656 "state": "online", 00:11:08.656 "raid_level": "raid0", 00:11:08.656 "superblock": true, 00:11:08.656 "num_base_bdevs": 4, 00:11:08.656 "num_base_bdevs_discovered": 4, 00:11:08.656 "num_base_bdevs_operational": 4, 00:11:08.656 "base_bdevs_list": [ 00:11:08.656 { 00:11:08.656 "name": "pt1", 00:11:08.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.656 "is_configured": true, 00:11:08.656 "data_offset": 2048, 00:11:08.656 "data_size": 63488 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "name": "pt2", 00:11:08.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.656 "is_configured": true, 00:11:08.656 "data_offset": 2048, 00:11:08.656 "data_size": 63488 00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "name": "pt3", 00:11:08.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.656 "is_configured": true, 00:11:08.656 "data_offset": 2048, 00:11:08.656 "data_size": 63488 
00:11:08.656 }, 00:11:08.656 { 00:11:08.656 "name": "pt4", 00:11:08.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.656 "is_configured": true, 00:11:08.656 "data_offset": 2048, 00:11:08.656 "data_size": 63488 00:11:08.656 } 00:11:08.656 ] 00:11:08.656 } 00:11:08.656 } 00:11:08.656 }' 00:11:08.656 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.656 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:08.656 pt2 00:11:08.656 pt3 00:11:08.656 pt4' 00:11:08.656 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.914 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.914 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.914 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:08.914 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:08.915 10:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.915 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 [2024-11-19 10:05:23.127106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 208f15da-9269-4746-8bed-6f92d0510527 '!=' 208f15da-9269-4746-8bed-6f92d0510527 ']' 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70699 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70699 ']' 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70699 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70699 00:11:09.173 killing process with pid 70699 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70699' 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70699 00:11:09.173 10:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70699 00:11:09.173 [2024-11-19 10:05:23.203849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.173 [2024-11-19 10:05:23.204027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.173 [2024-11-19 10:05:23.204138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.173 [2024-11-19 10:05:23.204155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:09.432 [2024-11-19 10:05:23.592289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.808 ************************************ 00:11:10.808 END TEST raid_superblock_test 00:11:10.808 ************************************ 00:11:10.808 10:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:10.808 00:11:10.808 real 0m6.175s 00:11:10.808 user 0m9.146s 00:11:10.808 sys 0m0.989s 00:11:10.808 10:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.808 10:05:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 10:05:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:10.808 10:05:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:10.808 10:05:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.808 10:05:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 ************************************ 00:11:10.808 START TEST raid_read_error_test 00:11:10.808 ************************************ 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L9knHJnR1T 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70971 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70971 00:11:10.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70971 ']' 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.808 10:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 [2024-11-19 10:05:24.919261] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:10.808 [2024-11-19 10:05:24.919518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70971 ] 00:11:11.067 [2024-11-19 10:05:25.106118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.067 [2024-11-19 10:05:25.251648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.325 [2024-11-19 10:05:25.477676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.325 [2024-11-19 10:05:25.477751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.628 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 BaseBdev1_malloc 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 true 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.888 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 [2024-11-19 10:05:25.896063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:11.888 [2024-11-19 10:05:25.896147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.888 [2024-11-19 10:05:25.896182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:11.888 [2024-11-19 10:05:25.896202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.888 [2024-11-19 10:05:25.899312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.888 [2024-11-19 10:05:25.899538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.888 BaseBdev1 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 BaseBdev2_malloc 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 true 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 [2024-11-19 10:05:25.960038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:11.889 [2024-11-19 10:05:25.960126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.889 [2024-11-19 10:05:25.960158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:11.889 [2024-11-19 10:05:25.960177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.889 [2024-11-19 10:05:25.963321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.889 [2024-11-19 10:05:25.963377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:11.889 BaseBdev2 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 BaseBdev3_malloc 00:11:11.889 10:05:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 true 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 [2024-11-19 10:05:26.035591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:11.889 [2024-11-19 10:05:26.035697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.889 [2024-11-19 10:05:26.035736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:11.889 [2024-11-19 10:05:26.035755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.889 [2024-11-19 10:05:26.039005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.889 [2024-11-19 10:05:26.039063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:11.889 BaseBdev3 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 BaseBdev4_malloc 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 true 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 [2024-11-19 10:05:26.108647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:11.889 [2024-11-19 10:05:26.108758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.889 [2024-11-19 10:05:26.108807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:11.889 [2024-11-19 10:05:26.108828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.889 [2024-11-19 10:05:26.112008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.889 [2024-11-19 10:05:26.112075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:11.889 BaseBdev4 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.889 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.149 [2024-11-19 10:05:26.121011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.149 [2024-11-19 10:05:26.123946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.149 [2024-11-19 10:05:26.124093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.149 [2024-11-19 10:05:26.124208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.149 [2024-11-19 10:05:26.124582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:12.149 [2024-11-19 10:05:26.124609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.149 [2024-11-19 10:05:26.125015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:12.149 [2024-11-19 10:05:26.125255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:12.149 [2024-11-19 10:05:26.125407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:12.149 [2024-11-19 10:05:26.125739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:12.149 10:05:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.149 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.149 "name": "raid_bdev1", 00:11:12.149 "uuid": "0b75dd45-5587-4d31-b189-648dc192d8a4", 00:11:12.149 "strip_size_kb": 64, 00:11:12.149 "state": "online", 00:11:12.149 "raid_level": "raid0", 00:11:12.149 "superblock": true, 00:11:12.149 "num_base_bdevs": 4, 00:11:12.149 "num_base_bdevs_discovered": 4, 00:11:12.149 "num_base_bdevs_operational": 4, 00:11:12.149 "base_bdevs_list": [ 00:11:12.149 
{ 00:11:12.149 "name": "BaseBdev1", 00:11:12.149 "uuid": "949ba532-9c6a-5dbe-a13c-22105ec0336a", 00:11:12.149 "is_configured": true, 00:11:12.149 "data_offset": 2048, 00:11:12.149 "data_size": 63488 00:11:12.149 }, 00:11:12.149 { 00:11:12.149 "name": "BaseBdev2", 00:11:12.149 "uuid": "612a1994-8207-52c6-a302-58f0543a157e", 00:11:12.149 "is_configured": true, 00:11:12.149 "data_offset": 2048, 00:11:12.149 "data_size": 63488 00:11:12.149 }, 00:11:12.149 { 00:11:12.149 "name": "BaseBdev3", 00:11:12.149 "uuid": "98a6d101-d0b9-5508-a0b2-823da94e31d9", 00:11:12.149 "is_configured": true, 00:11:12.149 "data_offset": 2048, 00:11:12.149 "data_size": 63488 00:11:12.149 }, 00:11:12.149 { 00:11:12.149 "name": "BaseBdev4", 00:11:12.149 "uuid": "8a41cc3a-d735-5b42-b282-f4cff7a8e40d", 00:11:12.149 "is_configured": true, 00:11:12.149 "data_offset": 2048, 00:11:12.149 "data_size": 63488 00:11:12.149 } 00:11:12.149 ] 00:11:12.149 }' 00:11:12.150 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.150 10:05:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.409 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:12.409 10:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.668 [2024-11-19 10:05:26.715335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.604 10:05:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.604 10:05:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.604 "name": "raid_bdev1", 00:11:13.604 "uuid": "0b75dd45-5587-4d31-b189-648dc192d8a4", 00:11:13.604 "strip_size_kb": 64, 00:11:13.604 "state": "online", 00:11:13.604 "raid_level": "raid0", 00:11:13.604 "superblock": true, 00:11:13.604 "num_base_bdevs": 4, 00:11:13.604 "num_base_bdevs_discovered": 4, 00:11:13.604 "num_base_bdevs_operational": 4, 00:11:13.604 "base_bdevs_list": [ 00:11:13.604 { 00:11:13.604 "name": "BaseBdev1", 00:11:13.604 "uuid": "949ba532-9c6a-5dbe-a13c-22105ec0336a", 00:11:13.604 "is_configured": true, 00:11:13.604 "data_offset": 2048, 00:11:13.604 "data_size": 63488 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "name": "BaseBdev2", 00:11:13.604 "uuid": "612a1994-8207-52c6-a302-58f0543a157e", 00:11:13.604 "is_configured": true, 00:11:13.604 "data_offset": 2048, 00:11:13.604 "data_size": 63488 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "name": "BaseBdev3", 00:11:13.605 "uuid": "98a6d101-d0b9-5508-a0b2-823da94e31d9", 00:11:13.605 "is_configured": true, 00:11:13.605 "data_offset": 2048, 00:11:13.605 "data_size": 63488 00:11:13.605 }, 00:11:13.605 { 00:11:13.605 "name": "BaseBdev4", 00:11:13.605 "uuid": "8a41cc3a-d735-5b42-b282-f4cff7a8e40d", 00:11:13.605 "is_configured": true, 00:11:13.605 "data_offset": 2048, 00:11:13.605 "data_size": 63488 00:11:13.605 } 00:11:13.605 ] 00:11:13.605 }' 00:11:13.605 10:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.605 10:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.172 [2024-11-19 10:05:28.195110] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.172 [2024-11-19 10:05:28.195322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.172 [2024-11-19 10:05:28.198916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.172 [2024-11-19 10:05:28.199019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.172 [2024-11-19 10:05:28.199092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.172 [2024-11-19 10:05:28.199122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:14.172 { 00:11:14.172 "results": [ 00:11:14.172 { 00:11:14.172 "job": "raid_bdev1", 00:11:14.172 "core_mask": "0x1", 00:11:14.172 "workload": "randrw", 00:11:14.172 "percentage": 50, 00:11:14.172 "status": "finished", 00:11:14.172 "queue_depth": 1, 00:11:14.172 "io_size": 131072, 00:11:14.172 "runtime": 1.477338, 00:11:14.172 "iops": 9680.249204988973, 00:11:14.172 "mibps": 1210.0311506236217, 00:11:14.172 "io_failed": 1, 00:11:14.172 "io_timeout": 0, 00:11:14.172 "avg_latency_us": 145.89623422026162, 00:11:14.172 "min_latency_us": 43.985454545454544, 00:11:14.172 "max_latency_us": 1891.6072727272726 00:11:14.172 } 00:11:14.172 ], 00:11:14.172 "core_count": 1 00:11:14.172 } 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70971 00:11:14.172 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70971 ']' 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70971 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70971 00:11:14.173 killing process with pid 70971 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70971' 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70971 00:11:14.173 10:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70971 00:11:14.173 [2024-11-19 10:05:28.232035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.431 [2024-11-19 10:05:28.557528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L9knHJnR1T 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:15.806 ************************************ 00:11:15.806 END TEST raid_read_error_test 00:11:15.806 ************************************ 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:11:15.806 00:11:15.806 real 0m5.039s 
00:11:15.806 user 0m6.035s 00:11:15.806 sys 0m0.677s 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.806 10:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.806 10:05:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:15.806 10:05:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:15.806 10:05:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.806 10:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.806 ************************************ 00:11:15.806 START TEST raid_write_error_test 00:11:15.806 ************************************ 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WAFcYWR4PA 00:11:15.806 10:05:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71118 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71118 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71118 ']' 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.806 10:05:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.806 [2024-11-19 10:05:29.962693] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:15.806 [2024-11-19 10:05:29.962936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:11:16.064 [2024-11-19 10:05:30.141846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.064 [2024-11-19 10:05:30.293021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.322 [2024-11-19 10:05:30.520936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.322 [2024-11-19 10:05:30.521039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.889 BaseBdev1_malloc 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.889 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 true 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 [2024-11-19 10:05:31.129249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.148 [2024-11-19 10:05:31.129374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.148 [2024-11-19 10:05:31.129419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.148 [2024-11-19 10:05:31.129440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.148 [2024-11-19 10:05:31.132761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.148 [2024-11-19 10:05:31.132847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.148 BaseBdev1 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 BaseBdev2_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.148 10:05:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 true 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 [2024-11-19 10:05:31.198014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.148 [2024-11-19 10:05:31.198110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.148 [2024-11-19 10:05:31.198147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.148 [2024-11-19 10:05:31.198166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.148 [2024-11-19 10:05:31.201390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.148 [2024-11-19 10:05:31.201463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.148 BaseBdev2 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.148 BaseBdev3_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 true 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 [2024-11-19 10:05:31.280163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.148 [2024-11-19 10:05:31.280273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.148 [2024-11-19 10:05:31.280313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.148 [2024-11-19 10:05:31.280333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.148 [2024-11-19 10:05:31.283615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.148 [2024-11-19 10:05:31.283850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.148 BaseBdev3 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 BaseBdev4_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.148 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.148 true 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.149 [2024-11-19 10:05:31.352474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.149 [2024-11-19 10:05:31.352579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.149 [2024-11-19 10:05:31.352619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.149 [2024-11-19 10:05:31.352640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.149 [2024-11-19 10:05:31.355921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.149 [2024-11-19 10:05:31.355993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.149 BaseBdev4 
00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.149 [2024-11-19 10:05:31.364636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.149 [2024-11-19 10:05:31.367759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.149 [2024-11-19 10:05:31.368132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.149 [2024-11-19 10:05:31.368265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.149 [2024-11-19 10:05:31.368677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:17.149 [2024-11-19 10:05:31.368703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.149 [2024-11-19 10:05:31.369122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:17.149 [2024-11-19 10:05:31.369382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:17.149 [2024-11-19 10:05:31.369404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:17.149 [2024-11-19 10:05:31.369661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.149 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.407 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.407 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.407 "name": "raid_bdev1", 00:11:17.407 "uuid": "e968ff43-8e72-48da-a152-ab56d73b0f9f", 00:11:17.407 "strip_size_kb": 64, 00:11:17.407 "state": "online", 00:11:17.407 "raid_level": "raid0", 00:11:17.407 "superblock": true, 00:11:17.407 "num_base_bdevs": 4, 00:11:17.407 "num_base_bdevs_discovered": 4, 00:11:17.407 
"num_base_bdevs_operational": 4, 00:11:17.407 "base_bdevs_list": [ 00:11:17.407 { 00:11:17.407 "name": "BaseBdev1", 00:11:17.407 "uuid": "2fb49eea-9c86-585e-9475-5cb184792e52", 00:11:17.407 "is_configured": true, 00:11:17.407 "data_offset": 2048, 00:11:17.407 "data_size": 63488 00:11:17.407 }, 00:11:17.407 { 00:11:17.407 "name": "BaseBdev2", 00:11:17.407 "uuid": "0e7c4c1e-d0e4-53b4-be97-302ddab45a8f", 00:11:17.407 "is_configured": true, 00:11:17.407 "data_offset": 2048, 00:11:17.407 "data_size": 63488 00:11:17.407 }, 00:11:17.407 { 00:11:17.407 "name": "BaseBdev3", 00:11:17.407 "uuid": "63bb89f0-bb0c-5add-8a85-940112afcd24", 00:11:17.407 "is_configured": true, 00:11:17.407 "data_offset": 2048, 00:11:17.407 "data_size": 63488 00:11:17.407 }, 00:11:17.407 { 00:11:17.407 "name": "BaseBdev4", 00:11:17.407 "uuid": "36bd9ea3-cc9c-5695-9dd5-f03840c355b3", 00:11:17.407 "is_configured": true, 00:11:17.407 "data_offset": 2048, 00:11:17.407 "data_size": 63488 00:11:17.407 } 00:11:17.407 ] 00:11:17.407 }' 00:11:17.407 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.407 10:05:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.666 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.666 10:05:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.924 [2024-11-19 10:05:32.046292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.867 "name": "raid_bdev1", 00:11:18.867 "uuid": "e968ff43-8e72-48da-a152-ab56d73b0f9f", 00:11:18.867 "strip_size_kb": 64, 00:11:18.867 "state": "online", 00:11:18.867 "raid_level": "raid0", 00:11:18.867 "superblock": true, 00:11:18.867 "num_base_bdevs": 4, 00:11:18.867 "num_base_bdevs_discovered": 4, 00:11:18.867 "num_base_bdevs_operational": 4, 00:11:18.867 "base_bdevs_list": [ 00:11:18.867 { 00:11:18.867 "name": "BaseBdev1", 00:11:18.867 "uuid": "2fb49eea-9c86-585e-9475-5cb184792e52", 00:11:18.867 "is_configured": true, 00:11:18.867 "data_offset": 2048, 00:11:18.867 "data_size": 63488 00:11:18.867 }, 00:11:18.867 { 00:11:18.867 "name": "BaseBdev2", 00:11:18.867 "uuid": "0e7c4c1e-d0e4-53b4-be97-302ddab45a8f", 00:11:18.867 "is_configured": true, 00:11:18.867 "data_offset": 2048, 00:11:18.867 "data_size": 63488 00:11:18.867 }, 00:11:18.867 { 00:11:18.867 "name": "BaseBdev3", 00:11:18.867 "uuid": "63bb89f0-bb0c-5add-8a85-940112afcd24", 00:11:18.867 "is_configured": true, 00:11:18.867 "data_offset": 2048, 00:11:18.867 "data_size": 63488 00:11:18.867 }, 00:11:18.867 { 00:11:18.867 "name": "BaseBdev4", 00:11:18.867 "uuid": "36bd9ea3-cc9c-5695-9dd5-f03840c355b3", 00:11:18.867 "is_configured": true, 00:11:18.867 "data_offset": 2048, 00:11:18.867 "data_size": 63488 00:11:18.867 } 00:11:18.867 ] 00:11:18.867 }' 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.867 10:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:19.433 [2024-11-19 10:05:33.489066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.433 [2024-11-19 10:05:33.489310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.433 [2024-11-19 10:05:33.492684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.433 [2024-11-19 10:05:33.492922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.433 [2024-11-19 10:05:33.493004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.433 [2024-11-19 10:05:33.493025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:19.433 { 00:11:19.433 "results": [ 00:11:19.433 { 00:11:19.433 "job": "raid_bdev1", 00:11:19.433 "core_mask": "0x1", 00:11:19.433 "workload": "randrw", 00:11:19.433 "percentage": 50, 00:11:19.433 "status": "finished", 00:11:19.433 "queue_depth": 1, 00:11:19.433 "io_size": 131072, 00:11:19.433 "runtime": 1.440026, 00:11:19.433 "iops": 9566.49393830389, 00:11:19.433 "mibps": 1195.8117422879864, 00:11:19.433 "io_failed": 1, 00:11:19.433 "io_timeout": 0, 00:11:19.433 "avg_latency_us": 147.7690981675652, 00:11:19.433 "min_latency_us": 44.45090909090909, 00:11:19.433 "max_latency_us": 1861.8181818181818 00:11:19.433 } 00:11:19.433 ], 00:11:19.433 "core_count": 1 00:11:19.433 } 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71118 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71118 ']' 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71118 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71118 00:11:19.433 killing process with pid 71118 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71118' 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71118 00:11:19.433 10:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71118 00:11:19.433 [2024-11-19 10:05:33.522981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.693 [2024-11-19 10:05:33.841542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WAFcYWR4PA 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.068 ************************************ 00:11:21.068 END TEST raid_write_error_test 00:11:21.068 ************************************ 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.69 != \0\.\0\0 ]] 00:11:21.068 00:11:21.068 real 0m5.174s 00:11:21.068 user 0m6.410s 00:11:21.068 sys 0m0.655s 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.068 10:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.068 10:05:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:21.068 10:05:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:21.068 10:05:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.068 10:05:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.069 10:05:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.069 ************************************ 00:11:21.069 START TEST raid_state_function_test 00:11:21.069 ************************************ 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71275 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71275' 00:11:21.069 Process raid pid: 71275 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71275 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71275 ']' 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.069 10:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.069 [2024-11-19 10:05:35.204501] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:21.069 [2024-11-19 10:05:35.204684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.327 [2024-11-19 10:05:35.392354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.327 [2024-11-19 10:05:35.542620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.585 [2024-11-19 10:05:35.774941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.585 [2024-11-19 10:05:35.775018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 [2024-11-19 10:05:36.266805] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.153 [2024-11-19 10:05:36.266908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.153 [2024-11-19 10:05:36.266928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.153 [2024-11-19 10:05:36.266946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.153 [2024-11-19 10:05:36.266955] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:22.153 [2024-11-19 10:05:36.266971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.153 [2024-11-19 10:05:36.266981] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.153 [2024-11-19 10:05:36.266995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.153 "name": "Existed_Raid", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "strip_size_kb": 64, 00:11:22.153 "state": "configuring", 00:11:22.153 "raid_level": "concat", 00:11:22.153 "superblock": false, 00:11:22.153 "num_base_bdevs": 4, 00:11:22.153 "num_base_bdevs_discovered": 0, 00:11:22.153 "num_base_bdevs_operational": 4, 00:11:22.153 "base_bdevs_list": [ 00:11:22.153 { 00:11:22.153 "name": "BaseBdev1", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 0 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev2", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 0 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev3", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 0 00:11:22.153 }, 00:11:22.153 { 00:11:22.153 "name": "BaseBdev4", 00:11:22.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.153 "is_configured": false, 00:11:22.153 "data_offset": 0, 00:11:22.153 "data_size": 0 00:11:22.153 } 00:11:22.153 ] 00:11:22.153 }' 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.153 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-11-19 10:05:36.815011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.720 [2024-11-19 10:05:36.815116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.720 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-11-19 10:05:36.822913] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.720 [2024-11-19 10:05:36.823001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.721 [2024-11-19 10:05:36.823018] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.721 [2024-11-19 10:05:36.823035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.721 [2024-11-19 10:05:36.823045] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.721 [2024-11-19 10:05:36.823060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.721 [2024-11-19 10:05:36.823070] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.721 [2024-11-19 10:05:36.823084] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 [2024-11-19 10:05:36.873589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.721 BaseBdev1 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 [ 00:11:22.721 { 00:11:22.721 "name": "BaseBdev1", 00:11:22.721 "aliases": [ 00:11:22.721 "271debbc-72a4-4eb3-9ea6-a69eb024ba26" 00:11:22.721 ], 00:11:22.721 "product_name": "Malloc disk", 00:11:22.721 "block_size": 512, 00:11:22.721 "num_blocks": 65536, 00:11:22.721 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:22.721 "assigned_rate_limits": { 00:11:22.721 "rw_ios_per_sec": 0, 00:11:22.721 "rw_mbytes_per_sec": 0, 00:11:22.721 "r_mbytes_per_sec": 0, 00:11:22.721 "w_mbytes_per_sec": 0 00:11:22.721 }, 00:11:22.721 "claimed": true, 00:11:22.721 "claim_type": "exclusive_write", 00:11:22.721 "zoned": false, 00:11:22.721 "supported_io_types": { 00:11:22.721 "read": true, 00:11:22.721 "write": true, 00:11:22.721 "unmap": true, 00:11:22.721 "flush": true, 00:11:22.721 "reset": true, 00:11:22.721 "nvme_admin": false, 00:11:22.721 "nvme_io": false, 00:11:22.721 "nvme_io_md": false, 00:11:22.721 "write_zeroes": true, 00:11:22.721 "zcopy": true, 00:11:22.721 "get_zone_info": false, 00:11:22.721 "zone_management": false, 00:11:22.721 "zone_append": false, 00:11:22.721 "compare": false, 00:11:22.721 "compare_and_write": false, 00:11:22.721 "abort": true, 00:11:22.721 "seek_hole": false, 00:11:22.721 "seek_data": false, 00:11:22.721 "copy": true, 00:11:22.721 "nvme_iov_md": false 00:11:22.721 }, 00:11:22.721 "memory_domains": [ 00:11:22.721 { 00:11:22.721 "dma_device_id": "system", 00:11:22.721 "dma_device_type": 1 00:11:22.721 }, 00:11:22.721 { 00:11:22.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.721 "dma_device_type": 2 00:11:22.721 } 00:11:22.721 ], 00:11:22.721 "driver_specific": {} 00:11:22.721 } 00:11:22.721 ] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.980 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.980 "name": "Existed_Raid", 
00:11:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.980 "strip_size_kb": 64, 00:11:22.980 "state": "configuring", 00:11:22.980 "raid_level": "concat", 00:11:22.980 "superblock": false, 00:11:22.980 "num_base_bdevs": 4, 00:11:22.980 "num_base_bdevs_discovered": 1, 00:11:22.980 "num_base_bdevs_operational": 4, 00:11:22.980 "base_bdevs_list": [ 00:11:22.980 { 00:11:22.980 "name": "BaseBdev1", 00:11:22.980 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:22.980 "is_configured": true, 00:11:22.980 "data_offset": 0, 00:11:22.980 "data_size": 65536 00:11:22.980 }, 00:11:22.980 { 00:11:22.980 "name": "BaseBdev2", 00:11:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.980 "is_configured": false, 00:11:22.980 "data_offset": 0, 00:11:22.980 "data_size": 0 00:11:22.980 }, 00:11:22.980 { 00:11:22.980 "name": "BaseBdev3", 00:11:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.980 "is_configured": false, 00:11:22.980 "data_offset": 0, 00:11:22.980 "data_size": 0 00:11:22.980 }, 00:11:22.980 { 00:11:22.980 "name": "BaseBdev4", 00:11:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.980 "is_configured": false, 00:11:22.980 "data_offset": 0, 00:11:22.980 "data_size": 0 00:11:22.980 } 00:11:22.980 ] 00:11:22.980 }' 00:11:22.980 10:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.980 10:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.547 [2024-11-19 10:05:37.485862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.547 [2024-11-19 10:05:37.486129] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.547 [2024-11-19 10:05:37.497989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.547 [2024-11-19 10:05:37.501023] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.547 [2024-11-19 10:05:37.501255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.547 [2024-11-19 10:05:37.501401] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.547 [2024-11-19 10:05:37.501475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.547 [2024-11-19 10:05:37.501493] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.547 [2024-11-19 10:05:37.501509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.547 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.547 "name": "Existed_Raid", 00:11:23.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.547 "strip_size_kb": 64, 00:11:23.547 "state": "configuring", 00:11:23.547 "raid_level": "concat", 00:11:23.547 "superblock": false, 00:11:23.547 "num_base_bdevs": 4, 00:11:23.547 
"num_base_bdevs_discovered": 1, 00:11:23.547 "num_base_bdevs_operational": 4, 00:11:23.547 "base_bdevs_list": [ 00:11:23.547 { 00:11:23.547 "name": "BaseBdev1", 00:11:23.547 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:23.547 "is_configured": true, 00:11:23.547 "data_offset": 0, 00:11:23.547 "data_size": 65536 00:11:23.547 }, 00:11:23.547 { 00:11:23.547 "name": "BaseBdev2", 00:11:23.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.547 "is_configured": false, 00:11:23.547 "data_offset": 0, 00:11:23.547 "data_size": 0 00:11:23.547 }, 00:11:23.547 { 00:11:23.547 "name": "BaseBdev3", 00:11:23.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.547 "is_configured": false, 00:11:23.547 "data_offset": 0, 00:11:23.547 "data_size": 0 00:11:23.547 }, 00:11:23.547 { 00:11:23.547 "name": "BaseBdev4", 00:11:23.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.547 "is_configured": false, 00:11:23.547 "data_offset": 0, 00:11:23.547 "data_size": 0 00:11:23.547 } 00:11:23.547 ] 00:11:23.547 }' 00:11:23.548 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.548 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.806 10:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.806 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.806 10:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.806 [2024-11-19 10:05:38.024477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.806 BaseBdev2 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.806 10:05:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.806 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.065 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.065 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.065 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.065 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.065 [ 00:11:24.065 { 00:11:24.065 "name": "BaseBdev2", 00:11:24.065 "aliases": [ 00:11:24.065 "e87e2d19-b90c-4a50-9cc2-91e1978f0284" 00:11:24.065 ], 00:11:24.065 "product_name": "Malloc disk", 00:11:24.065 "block_size": 512, 00:11:24.065 "num_blocks": 65536, 00:11:24.065 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:24.065 "assigned_rate_limits": { 00:11:24.065 "rw_ios_per_sec": 0, 00:11:24.065 "rw_mbytes_per_sec": 0, 00:11:24.065 "r_mbytes_per_sec": 0, 00:11:24.065 "w_mbytes_per_sec": 0 00:11:24.065 }, 00:11:24.065 "claimed": true, 00:11:24.065 "claim_type": "exclusive_write", 00:11:24.065 "zoned": false, 00:11:24.065 "supported_io_types": { 
00:11:24.065 "read": true, 00:11:24.065 "write": true, 00:11:24.065 "unmap": true, 00:11:24.065 "flush": true, 00:11:24.065 "reset": true, 00:11:24.065 "nvme_admin": false, 00:11:24.065 "nvme_io": false, 00:11:24.065 "nvme_io_md": false, 00:11:24.065 "write_zeroes": true, 00:11:24.065 "zcopy": true, 00:11:24.065 "get_zone_info": false, 00:11:24.065 "zone_management": false, 00:11:24.065 "zone_append": false, 00:11:24.065 "compare": false, 00:11:24.065 "compare_and_write": false, 00:11:24.065 "abort": true, 00:11:24.065 "seek_hole": false, 00:11:24.065 "seek_data": false, 00:11:24.065 "copy": true, 00:11:24.066 "nvme_iov_md": false 00:11:24.066 }, 00:11:24.066 "memory_domains": [ 00:11:24.066 { 00:11:24.066 "dma_device_id": "system", 00:11:24.066 "dma_device_type": 1 00:11:24.066 }, 00:11:24.066 { 00:11:24.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.066 "dma_device_type": 2 00:11:24.066 } 00:11:24.066 ], 00:11:24.066 "driver_specific": {} 00:11:24.066 } 00:11:24.066 ] 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.066 "name": "Existed_Raid", 00:11:24.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.066 "strip_size_kb": 64, 00:11:24.066 "state": "configuring", 00:11:24.066 "raid_level": "concat", 00:11:24.066 "superblock": false, 00:11:24.066 "num_base_bdevs": 4, 00:11:24.066 "num_base_bdevs_discovered": 2, 00:11:24.066 "num_base_bdevs_operational": 4, 00:11:24.066 "base_bdevs_list": [ 00:11:24.066 { 00:11:24.066 "name": "BaseBdev1", 00:11:24.066 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:24.066 "is_configured": true, 00:11:24.066 "data_offset": 0, 00:11:24.066 "data_size": 65536 00:11:24.066 }, 00:11:24.066 { 00:11:24.066 "name": "BaseBdev2", 00:11:24.066 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:24.066 
"is_configured": true, 00:11:24.066 "data_offset": 0, 00:11:24.066 "data_size": 65536 00:11:24.066 }, 00:11:24.066 { 00:11:24.066 "name": "BaseBdev3", 00:11:24.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.066 "is_configured": false, 00:11:24.066 "data_offset": 0, 00:11:24.066 "data_size": 0 00:11:24.066 }, 00:11:24.066 { 00:11:24.066 "name": "BaseBdev4", 00:11:24.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.066 "is_configured": false, 00:11:24.066 "data_offset": 0, 00:11:24.066 "data_size": 0 00:11:24.066 } 00:11:24.066 ] 00:11:24.066 }' 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.066 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.325 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.325 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.583 [2024-11-19 10:05:38.583626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.583 BaseBdev3 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.583 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.583 [ 00:11:24.583 { 00:11:24.583 "name": "BaseBdev3", 00:11:24.583 "aliases": [ 00:11:24.584 "adb3ae29-56f9-4ec3-9274-47bf59295ae8" 00:11:24.584 ], 00:11:24.584 "product_name": "Malloc disk", 00:11:24.584 "block_size": 512, 00:11:24.584 "num_blocks": 65536, 00:11:24.584 "uuid": "adb3ae29-56f9-4ec3-9274-47bf59295ae8", 00:11:24.584 "assigned_rate_limits": { 00:11:24.584 "rw_ios_per_sec": 0, 00:11:24.584 "rw_mbytes_per_sec": 0, 00:11:24.584 "r_mbytes_per_sec": 0, 00:11:24.584 "w_mbytes_per_sec": 0 00:11:24.584 }, 00:11:24.584 "claimed": true, 00:11:24.584 "claim_type": "exclusive_write", 00:11:24.584 "zoned": false, 00:11:24.584 "supported_io_types": { 00:11:24.584 "read": true, 00:11:24.584 "write": true, 00:11:24.584 "unmap": true, 00:11:24.584 "flush": true, 00:11:24.584 "reset": true, 00:11:24.584 "nvme_admin": false, 00:11:24.584 "nvme_io": false, 00:11:24.584 "nvme_io_md": false, 00:11:24.584 "write_zeroes": true, 00:11:24.584 "zcopy": true, 00:11:24.584 "get_zone_info": false, 00:11:24.584 "zone_management": false, 00:11:24.584 "zone_append": false, 00:11:24.584 "compare": false, 00:11:24.584 "compare_and_write": false, 
00:11:24.584 "abort": true, 00:11:24.584 "seek_hole": false, 00:11:24.584 "seek_data": false, 00:11:24.584 "copy": true, 00:11:24.584 "nvme_iov_md": false 00:11:24.584 }, 00:11:24.584 "memory_domains": [ 00:11:24.584 { 00:11:24.584 "dma_device_id": "system", 00:11:24.584 "dma_device_type": 1 00:11:24.584 }, 00:11:24.584 { 00:11:24.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.584 "dma_device_type": 2 00:11:24.584 } 00:11:24.584 ], 00:11:24.584 "driver_specific": {} 00:11:24.584 } 00:11:24.584 ] 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.584 "name": "Existed_Raid", 00:11:24.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.584 "strip_size_kb": 64, 00:11:24.584 "state": "configuring", 00:11:24.584 "raid_level": "concat", 00:11:24.584 "superblock": false, 00:11:24.584 "num_base_bdevs": 4, 00:11:24.584 "num_base_bdevs_discovered": 3, 00:11:24.584 "num_base_bdevs_operational": 4, 00:11:24.584 "base_bdevs_list": [ 00:11:24.584 { 00:11:24.584 "name": "BaseBdev1", 00:11:24.584 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:24.584 "is_configured": true, 00:11:24.584 "data_offset": 0, 00:11:24.584 "data_size": 65536 00:11:24.584 }, 00:11:24.584 { 00:11:24.584 "name": "BaseBdev2", 00:11:24.584 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:24.584 "is_configured": true, 00:11:24.584 "data_offset": 0, 00:11:24.584 "data_size": 65536 00:11:24.584 }, 00:11:24.584 { 00:11:24.584 "name": "BaseBdev3", 00:11:24.584 "uuid": "adb3ae29-56f9-4ec3-9274-47bf59295ae8", 00:11:24.584 "is_configured": true, 00:11:24.584 "data_offset": 0, 00:11:24.584 "data_size": 65536 00:11:24.584 }, 00:11:24.584 { 00:11:24.584 "name": "BaseBdev4", 00:11:24.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.584 "is_configured": false, 
00:11:24.584 "data_offset": 0, 00:11:24.584 "data_size": 0 00:11:24.584 } 00:11:24.584 ] 00:11:24.584 }' 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.584 10:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.151 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.151 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 [2024-11-19 10:05:39.175378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.151 [2024-11-19 10:05:39.175769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:25.151 [2024-11-19 10:05:39.175809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:25.151 [2024-11-19 10:05:39.176201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.151 [2024-11-19 10:05:39.176441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:25.151 [2024-11-19 10:05:39.176466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:25.152 [2024-11-19 10:05:39.176844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.152 BaseBdev4 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 [ 00:11:25.152 { 00:11:25.152 "name": "BaseBdev4", 00:11:25.152 "aliases": [ 00:11:25.152 "7f4b190f-500c-4656-a8dd-32638b614897" 00:11:25.152 ], 00:11:25.152 "product_name": "Malloc disk", 00:11:25.152 "block_size": 512, 00:11:25.152 "num_blocks": 65536, 00:11:25.152 "uuid": "7f4b190f-500c-4656-a8dd-32638b614897", 00:11:25.152 "assigned_rate_limits": { 00:11:25.152 "rw_ios_per_sec": 0, 00:11:25.152 "rw_mbytes_per_sec": 0, 00:11:25.152 "r_mbytes_per_sec": 0, 00:11:25.152 "w_mbytes_per_sec": 0 00:11:25.152 }, 00:11:25.152 "claimed": true, 00:11:25.152 "claim_type": "exclusive_write", 00:11:25.152 "zoned": false, 00:11:25.152 "supported_io_types": { 00:11:25.152 "read": true, 00:11:25.152 "write": true, 00:11:25.152 "unmap": true, 00:11:25.152 "flush": true, 00:11:25.152 "reset": true, 00:11:25.152 
"nvme_admin": false, 00:11:25.152 "nvme_io": false, 00:11:25.152 "nvme_io_md": false, 00:11:25.152 "write_zeroes": true, 00:11:25.152 "zcopy": true, 00:11:25.152 "get_zone_info": false, 00:11:25.152 "zone_management": false, 00:11:25.152 "zone_append": false, 00:11:25.152 "compare": false, 00:11:25.152 "compare_and_write": false, 00:11:25.152 "abort": true, 00:11:25.152 "seek_hole": false, 00:11:25.152 "seek_data": false, 00:11:25.152 "copy": true, 00:11:25.152 "nvme_iov_md": false 00:11:25.152 }, 00:11:25.152 "memory_domains": [ 00:11:25.152 { 00:11:25.152 "dma_device_id": "system", 00:11:25.152 "dma_device_type": 1 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.152 "dma_device_type": 2 00:11:25.152 } 00:11:25.152 ], 00:11:25.152 "driver_specific": {} 00:11:25.152 } 00:11:25.152 ] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.152 
10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.152 "name": "Existed_Raid", 00:11:25.152 "uuid": "77313b20-b089-48c2-963d-9aded5ad3259", 00:11:25.152 "strip_size_kb": 64, 00:11:25.152 "state": "online", 00:11:25.152 "raid_level": "concat", 00:11:25.152 "superblock": false, 00:11:25.152 "num_base_bdevs": 4, 00:11:25.152 "num_base_bdevs_discovered": 4, 00:11:25.152 "num_base_bdevs_operational": 4, 00:11:25.152 "base_bdevs_list": [ 00:11:25.152 { 00:11:25.152 "name": "BaseBdev1", 00:11:25.152 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 0, 00:11:25.152 "data_size": 65536 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "name": "BaseBdev2", 00:11:25.152 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 0, 00:11:25.152 "data_size": 65536 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "name": "BaseBdev3", 
00:11:25.152 "uuid": "adb3ae29-56f9-4ec3-9274-47bf59295ae8", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 0, 00:11:25.152 "data_size": 65536 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "name": "BaseBdev4", 00:11:25.152 "uuid": "7f4b190f-500c-4656-a8dd-32638b614897", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 0, 00:11:25.152 "data_size": 65536 00:11:25.152 } 00:11:25.152 ] 00:11:25.152 }' 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.152 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.719 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.719 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.719 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.719 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.720 [2024-11-19 10:05:39.824169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.720 
10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.720 "name": "Existed_Raid", 00:11:25.720 "aliases": [ 00:11:25.720 "77313b20-b089-48c2-963d-9aded5ad3259" 00:11:25.720 ], 00:11:25.720 "product_name": "Raid Volume", 00:11:25.720 "block_size": 512, 00:11:25.720 "num_blocks": 262144, 00:11:25.720 "uuid": "77313b20-b089-48c2-963d-9aded5ad3259", 00:11:25.720 "assigned_rate_limits": { 00:11:25.720 "rw_ios_per_sec": 0, 00:11:25.720 "rw_mbytes_per_sec": 0, 00:11:25.720 "r_mbytes_per_sec": 0, 00:11:25.720 "w_mbytes_per_sec": 0 00:11:25.720 }, 00:11:25.720 "claimed": false, 00:11:25.720 "zoned": false, 00:11:25.720 "supported_io_types": { 00:11:25.720 "read": true, 00:11:25.720 "write": true, 00:11:25.720 "unmap": true, 00:11:25.720 "flush": true, 00:11:25.720 "reset": true, 00:11:25.720 "nvme_admin": false, 00:11:25.720 "nvme_io": false, 00:11:25.720 "nvme_io_md": false, 00:11:25.720 "write_zeroes": true, 00:11:25.720 "zcopy": false, 00:11:25.720 "get_zone_info": false, 00:11:25.720 "zone_management": false, 00:11:25.720 "zone_append": false, 00:11:25.720 "compare": false, 00:11:25.720 "compare_and_write": false, 00:11:25.720 "abort": false, 00:11:25.720 "seek_hole": false, 00:11:25.720 "seek_data": false, 00:11:25.720 "copy": false, 00:11:25.720 "nvme_iov_md": false 00:11:25.720 }, 00:11:25.720 "memory_domains": [ 00:11:25.720 { 00:11:25.720 "dma_device_id": "system", 00:11:25.720 "dma_device_type": 1 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.720 "dma_device_type": 2 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "system", 00:11:25.720 "dma_device_type": 1 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.720 "dma_device_type": 2 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "system", 00:11:25.720 "dma_device_type": 1 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:25.720 "dma_device_type": 2 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "system", 00:11:25.720 "dma_device_type": 1 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.720 "dma_device_type": 2 00:11:25.720 } 00:11:25.720 ], 00:11:25.720 "driver_specific": { 00:11:25.720 "raid": { 00:11:25.720 "uuid": "77313b20-b089-48c2-963d-9aded5ad3259", 00:11:25.720 "strip_size_kb": 64, 00:11:25.720 "state": "online", 00:11:25.720 "raid_level": "concat", 00:11:25.720 "superblock": false, 00:11:25.720 "num_base_bdevs": 4, 00:11:25.720 "num_base_bdevs_discovered": 4, 00:11:25.720 "num_base_bdevs_operational": 4, 00:11:25.720 "base_bdevs_list": [ 00:11:25.720 { 00:11:25.720 "name": "BaseBdev1", 00:11:25.720 "uuid": "271debbc-72a4-4eb3-9ea6-a69eb024ba26", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 65536 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": "BaseBdev2", 00:11:25.720 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 65536 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": "BaseBdev3", 00:11:25.720 "uuid": "adb3ae29-56f9-4ec3-9274-47bf59295ae8", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 65536 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": "BaseBdev4", 00:11:25.720 "uuid": "7f4b190f-500c-4656-a8dd-32638b614897", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 65536 00:11:25.720 } 00:11:25.720 ] 00:11:25.720 } 00:11:25.720 } 00:11:25.720 }' 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:25.720 BaseBdev2 
00:11:25.720 BaseBdev3 00:11:25.720 BaseBdev4' 00:11:25.720 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.980 10:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.980 10:05:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.980 10:05:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.980 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.980 [2024-11-19 10:05:40.187841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.980 [2024-11-19 10:05:40.187896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.980 [2024-11-19 10:05:40.187985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.239 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.239 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.240 "name": "Existed_Raid", 00:11:26.240 "uuid": "77313b20-b089-48c2-963d-9aded5ad3259", 00:11:26.240 "strip_size_kb": 64, 00:11:26.240 "state": "offline", 00:11:26.240 "raid_level": "concat", 00:11:26.240 "superblock": false, 00:11:26.240 "num_base_bdevs": 4, 00:11:26.240 "num_base_bdevs_discovered": 3, 00:11:26.240 "num_base_bdevs_operational": 3, 00:11:26.240 "base_bdevs_list": [ 00:11:26.240 { 00:11:26.240 "name": null, 00:11:26.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.240 "is_configured": false, 00:11:26.240 "data_offset": 0, 00:11:26.240 "data_size": 65536 00:11:26.240 }, 00:11:26.240 { 00:11:26.240 "name": "BaseBdev2", 00:11:26.240 "uuid": "e87e2d19-b90c-4a50-9cc2-91e1978f0284", 00:11:26.240 "is_configured": 
true, 00:11:26.240 "data_offset": 0, 00:11:26.240 "data_size": 65536 00:11:26.240 }, 00:11:26.240 { 00:11:26.240 "name": "BaseBdev3", 00:11:26.240 "uuid": "adb3ae29-56f9-4ec3-9274-47bf59295ae8", 00:11:26.240 "is_configured": true, 00:11:26.240 "data_offset": 0, 00:11:26.240 "data_size": 65536 00:11:26.240 }, 00:11:26.240 { 00:11:26.240 "name": "BaseBdev4", 00:11:26.240 "uuid": "7f4b190f-500c-4656-a8dd-32638b614897", 00:11:26.240 "is_configured": true, 00:11:26.240 "data_offset": 0, 00:11:26.240 "data_size": 65536 00:11:26.240 } 00:11:26.240 ] 00:11:26.240 }' 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.240 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.807 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:26.808 10:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.808 [2024-11-19 10:05:40.914405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.808 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.067 [2024-11-19 10:05:41.084131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.067 10:05:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.067 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.067 [2024-11-19 10:05:41.239513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:27.067 [2024-11-19 10:05:41.239588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.326 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.327 BaseBdev2 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.327 [ 00:11:27.327 { 00:11:27.327 "name": "BaseBdev2", 00:11:27.327 "aliases": [ 00:11:27.327 "01ef3561-8da2-4b00-b47b-b619f6940f0f" 00:11:27.327 ], 00:11:27.327 "product_name": "Malloc disk", 00:11:27.327 "block_size": 512, 00:11:27.327 "num_blocks": 65536, 00:11:27.327 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f", 00:11:27.327 "assigned_rate_limits": { 00:11:27.327 "rw_ios_per_sec": 0, 00:11:27.327 "rw_mbytes_per_sec": 0, 00:11:27.327 "r_mbytes_per_sec": 0, 00:11:27.327 "w_mbytes_per_sec": 0 00:11:27.327 }, 00:11:27.327 "claimed": false, 00:11:27.327 "zoned": false, 00:11:27.327 "supported_io_types": { 00:11:27.327 "read": true, 00:11:27.327 "write": true, 00:11:27.327 "unmap": true, 00:11:27.327 "flush": true, 00:11:27.327 "reset": true, 00:11:27.327 "nvme_admin": false, 00:11:27.327 "nvme_io": false, 00:11:27.327 "nvme_io_md": false, 00:11:27.327 "write_zeroes": true, 00:11:27.327 "zcopy": true, 00:11:27.327 "get_zone_info": false, 00:11:27.327 "zone_management": false, 00:11:27.327 "zone_append": false, 00:11:27.327 "compare": false, 00:11:27.327 "compare_and_write": false, 00:11:27.327 "abort": true, 00:11:27.327 "seek_hole": false, 00:11:27.327 
"seek_data": false,
00:11:27.327 "copy": true,
00:11:27.327 "nvme_iov_md": false
00:11:27.327 },
00:11:27.327 "memory_domains": [
00:11:27.327 {
00:11:27.327 "dma_device_id": "system",
00:11:27.327 "dma_device_type": 1
00:11:27.327 },
00:11:27.327 {
00:11:27.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:27.327 "dma_device_type": 2
00:11:27.327 }
00:11:27.327 ],
00:11:27.327 "driver_specific": {}
00:11:27.327 }
00:11:27.327 ]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.327 BaseBdev3
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.327 [
00:11:27.327 {
00:11:27.327 "name": "BaseBdev3",
00:11:27.327 "aliases": [
00:11:27.327 "c3cac15c-d472-4d91-a1c1-4a1c14b7f826"
00:11:27.327 ],
00:11:27.327 "product_name": "Malloc disk",
00:11:27.327 "block_size": 512,
00:11:27.327 "num_blocks": 65536,
00:11:27.327 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:27.327 "assigned_rate_limits": {
00:11:27.327 "rw_ios_per_sec": 0,
00:11:27.327 "rw_mbytes_per_sec": 0,
00:11:27.327 "r_mbytes_per_sec": 0,
00:11:27.327 "w_mbytes_per_sec": 0
00:11:27.327 },
00:11:27.327 "claimed": false,
00:11:27.327 "zoned": false,
00:11:27.327 "supported_io_types": {
00:11:27.327 "read": true,
00:11:27.327 "write": true,
00:11:27.327 "unmap": true,
00:11:27.327 "flush": true,
00:11:27.327 "reset": true,
00:11:27.327 "nvme_admin": false,
00:11:27.327 "nvme_io": false,
00:11:27.327 "nvme_io_md": false,
00:11:27.327 "write_zeroes": true,
00:11:27.327 "zcopy": true,
00:11:27.327 "get_zone_info": false,
00:11:27.327 "zone_management": false,
00:11:27.327 "zone_append": false,
00:11:27.327 "compare": false,
00:11:27.327 "compare_and_write": false,
00:11:27.327 "abort": true,
00:11:27.327 "seek_hole": false,
00:11:27.327 "seek_data": false,
00:11:27.327 "copy": true,
00:11:27.327 "nvme_iov_md": false
00:11:27.327 },
00:11:27.327 "memory_domains": [
00:11:27.327 {
00:11:27.327 "dma_device_id": "system",
00:11:27.327 "dma_device_type": 1
00:11:27.327 },
00:11:27.327 {
00:11:27.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:27.327 "dma_device_type": 2
00:11:27.327 }
00:11:27.327 ],
00:11:27.327 "driver_specific": {}
00:11:27.327 }
00:11:27.327 ]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.327 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.587 BaseBdev4
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.587 [
00:11:27.587 {
00:11:27.587 "name": "BaseBdev4",
00:11:27.587 "aliases": [
00:11:27.587 "a375b46c-b91b-459c-b8e2-d13a87f39a88"
00:11:27.587 ],
00:11:27.587 "product_name": "Malloc disk",
00:11:27.587 "block_size": 512,
00:11:27.587 "num_blocks": 65536,
00:11:27.587 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:27.587 "assigned_rate_limits": {
00:11:27.587 "rw_ios_per_sec": 0,
00:11:27.587 "rw_mbytes_per_sec": 0,
00:11:27.587 "r_mbytes_per_sec": 0,
00:11:27.587 "w_mbytes_per_sec": 0
00:11:27.587 },
00:11:27.587 "claimed": false,
00:11:27.587 "zoned": false,
00:11:27.587 "supported_io_types": {
00:11:27.587 "read": true,
00:11:27.587 "write": true,
00:11:27.587 "unmap": true,
00:11:27.587 "flush": true,
00:11:27.587 "reset": true,
00:11:27.587 "nvme_admin": false,
00:11:27.587 "nvme_io": false,
00:11:27.587 "nvme_io_md": false,
00:11:27.587 "write_zeroes": true,
00:11:27.587 "zcopy": true,
00:11:27.587 "get_zone_info": false,
00:11:27.587 "zone_management": false,
00:11:27.587 "zone_append": false,
00:11:27.587 "compare": false,
00:11:27.587 "compare_and_write": false,
00:11:27.587 "abort": true,
00:11:27.587 "seek_hole": false,
00:11:27.587 "seek_data": false,
00:11:27.587 "copy": true,
00:11:27.587 "nvme_iov_md": false
00:11:27.587 },
00:11:27.587 "memory_domains": [
00:11:27.587 {
00:11:27.587 "dma_device_id": "system",
00:11:27.587 "dma_device_type": 1
00:11:27.587 },
00:11:27.587 {
00:11:27.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:27.587 "dma_device_type": 2
00:11:27.587 }
00:11:27.587 ],
00:11:27.587 "driver_specific": {}
00:11:27.587 }
00:11:27.587 ]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.587 [2024-11-19 10:05:41.635128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:27.587 [2024-11-19 10:05:41.635347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:27.587 [2024-11-19 10:05:41.635510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:27.587 [2024-11-19 10:05:41.638428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:27.587 [2024-11-19 10:05:41.638648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.587 "name": "Existed_Raid",
00:11:27.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:27.587 "strip_size_kb": 64,
00:11:27.587 "state": "configuring",
00:11:27.587 "raid_level": "concat",
00:11:27.587 "superblock": false,
00:11:27.587 "num_base_bdevs": 4,
00:11:27.587 "num_base_bdevs_discovered": 3,
00:11:27.587 "num_base_bdevs_operational": 4,
00:11:27.587 "base_bdevs_list": [
00:11:27.587 {
00:11:27.587 "name": "BaseBdev1",
00:11:27.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:27.587 "is_configured": false,
00:11:27.587 "data_offset": 0,
00:11:27.587 "data_size": 0
00:11:27.587 },
00:11:27.587 {
00:11:27.587 "name": "BaseBdev2",
00:11:27.587 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:27.587 "is_configured": true,
00:11:27.587 "data_offset": 0,
00:11:27.587 "data_size": 65536
00:11:27.587 },
00:11:27.587 {
00:11:27.587 "name": "BaseBdev3",
00:11:27.587 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:27.587 "is_configured": true,
00:11:27.587 "data_offset": 0,
00:11:27.587 "data_size": 65536
00:11:27.587 },
00:11:27.587 {
00:11:27.587 "name": "BaseBdev4",
00:11:27.587 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:27.587 "is_configured": true,
00:11:27.587 "data_offset": 0,
00:11:27.587 "data_size": 65536
00:11:27.587 }
00:11:27.587 ]
00:11:27.587 }'
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.587 10:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.238 [2024-11-19 10:05:42.143258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.238 "name": "Existed_Raid",
00:11:28.238 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.238 "strip_size_kb": 64,
00:11:28.238 "state": "configuring",
00:11:28.238 "raid_level": "concat",
00:11:28.238 "superblock": false,
00:11:28.238 "num_base_bdevs": 4,
00:11:28.238 "num_base_bdevs_discovered": 2,
00:11:28.238 "num_base_bdevs_operational": 4,
00:11:28.238 "base_bdevs_list": [
00:11:28.238 {
00:11:28.238 "name": "BaseBdev1",
00:11:28.238 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.238 "is_configured": false,
00:11:28.238 "data_offset": 0,
00:11:28.238 "data_size": 0
00:11:28.238 },
00:11:28.238 {
00:11:28.238 "name": null,
00:11:28.238 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:28.238 "is_configured": false,
00:11:28.238 "data_offset": 0,
00:11:28.238 "data_size": 65536
00:11:28.238 },
00:11:28.238 {
00:11:28.238 "name": "BaseBdev3",
00:11:28.238 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:28.238 "is_configured": true,
00:11:28.238 "data_offset": 0,
00:11:28.238 "data_size": 65536
00:11:28.238 },
00:11:28.238 {
00:11:28.238 "name": "BaseBdev4",
00:11:28.238 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:28.238 "is_configured": true,
00:11:28.238 "data_offset": 0,
00:11:28.238 "data_size": 65536
00:11:28.238 }
00:11:28.238 ]
00:11:28.238 }'
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.238 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.497 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:28.497 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.497 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.497 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.758 [2024-11-19 10:05:42.781169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:28.758 BaseBdev1
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.758 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.758 [
00:11:28.758 {
00:11:28.758 "name": "BaseBdev1",
00:11:28.758 "aliases": [
00:11:28.758 "3b2aa23e-4fc6-4943-80c7-f789dafb94d5"
00:11:28.758 ],
00:11:28.758 "product_name": "Malloc disk",
00:11:28.758 "block_size": 512,
00:11:28.758 "num_blocks": 65536,
00:11:28.758 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5",
00:11:28.758 "assigned_rate_limits": {
00:11:28.758 "rw_ios_per_sec": 0,
00:11:28.758 "rw_mbytes_per_sec": 0,
00:11:28.758 "r_mbytes_per_sec": 0,
00:11:28.758 "w_mbytes_per_sec": 0
00:11:28.758 },
00:11:28.758 "claimed": true,
00:11:28.758 "claim_type": "exclusive_write",
00:11:28.758 "zoned": false,
00:11:28.758 "supported_io_types": {
00:11:28.758 "read": true,
00:11:28.758 "write": true,
00:11:28.758 "unmap": true,
00:11:28.758 "flush": true,
00:11:28.758 "reset": true,
00:11:28.758 "nvme_admin": false,
00:11:28.758 "nvme_io": false,
00:11:28.758 "nvme_io_md": false,
00:11:28.758 "write_zeroes": true,
00:11:28.758 "zcopy": true,
00:11:28.759 "get_zone_info": false,
00:11:28.759 "zone_management": false,
00:11:28.759 "zone_append": false,
00:11:28.759 "compare": false,
00:11:28.759 "compare_and_write": false,
00:11:28.759 "abort": true,
00:11:28.759 "seek_hole": false,
00:11:28.759 "seek_data": false,
00:11:28.759 "copy": true,
00:11:28.759 "nvme_iov_md": false
00:11:28.759 },
00:11:28.759 "memory_domains": [
00:11:28.759 {
00:11:28.759 "dma_device_id": "system",
00:11:28.759 "dma_device_type": 1
00:11:28.759 },
00:11:28.759 {
00:11:28.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:28.759 "dma_device_type": 2
00:11:28.759 }
00:11:28.759 ],
00:11:28.759 "driver_specific": {}
00:11:28.759 }
00:11:28.759 ]
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.759 "name": "Existed_Raid",
00:11:28.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.759 "strip_size_kb": 64,
00:11:28.759 "state": "configuring",
00:11:28.759 "raid_level": "concat",
00:11:28.759 "superblock": false,
00:11:28.759 "num_base_bdevs": 4,
00:11:28.759 "num_base_bdevs_discovered": 3,
00:11:28.759 "num_base_bdevs_operational": 4,
00:11:28.759 "base_bdevs_list": [
00:11:28.759 {
00:11:28.759 "name": "BaseBdev1",
00:11:28.759 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5",
00:11:28.759 "is_configured": true,
00:11:28.759 "data_offset": 0,
00:11:28.759 "data_size": 65536
00:11:28.759 },
00:11:28.759 {
00:11:28.759 "name": null,
00:11:28.759 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:28.759 "is_configured": false,
00:11:28.759 "data_offset": 0,
00:11:28.759 "data_size": 65536
00:11:28.759 },
00:11:28.759 {
00:11:28.759 "name": "BaseBdev3",
00:11:28.759 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:28.759 "is_configured": true,
00:11:28.759 "data_offset": 0,
00:11:28.759 "data_size": 65536
00:11:28.759 },
00:11:28.759 {
00:11:28.759 "name": "BaseBdev4",
00:11:28.759 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:28.759 "is_configured": true,
00:11:28.759 "data_offset": 0,
00:11:28.759 "data_size": 65536
00:11:28.759 }
00:11:28.759 ]
00:11:28.759 }'
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.759 10:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.329 [2024-11-19 10:05:43.401478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.329 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.329 "name": "Existed_Raid",
00:11:29.329 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.329 "strip_size_kb": 64,
00:11:29.329 "state": "configuring",
00:11:29.329 "raid_level": "concat",
00:11:29.329 "superblock": false,
00:11:29.329 "num_base_bdevs": 4,
00:11:29.329 "num_base_bdevs_discovered": 2,
00:11:29.329 "num_base_bdevs_operational": 4,
00:11:29.329 "base_bdevs_list": [
00:11:29.329 {
00:11:29.329 "name": "BaseBdev1",
00:11:29.329 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5",
00:11:29.329 "is_configured": true,
00:11:29.329 "data_offset": 0,
00:11:29.329 "data_size": 65536
00:11:29.329 },
00:11:29.329 {
00:11:29.329 "name": null,
00:11:29.329 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:29.329 "is_configured": false,
00:11:29.329 "data_offset": 0,
00:11:29.329 "data_size": 65536
00:11:29.329 },
00:11:29.330 {
00:11:29.330 "name": null,
00:11:29.330 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:29.330 "is_configured": false,
00:11:29.330 "data_offset": 0,
00:11:29.330 "data_size": 65536
00:11:29.330 },
00:11:29.330 {
00:11:29.330 "name": "BaseBdev4",
00:11:29.330 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:29.330 "is_configured": true,
00:11:29.330 "data_offset": 0,
00:11:29.330 "data_size": 65536
00:11:29.330 }
00:11:29.330 ]
00:11:29.330 }'
00:11:29.330 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.330 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.898 [2024-11-19 10:05:43.985628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.898 10:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.898 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.898 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.898 "name": "Existed_Raid",
00:11:29.898 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.898 "strip_size_kb": 64,
00:11:29.898 "state": "configuring",
00:11:29.898 "raid_level": "concat",
00:11:29.898 "superblock": false,
00:11:29.898 "num_base_bdevs": 4,
00:11:29.898 "num_base_bdevs_discovered": 3,
00:11:29.898 "num_base_bdevs_operational": 4,
00:11:29.898 "base_bdevs_list": [
00:11:29.898 {
00:11:29.898 "name": "BaseBdev1",
00:11:29.898 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5",
00:11:29.898 "is_configured": true,
00:11:29.898 "data_offset": 0,
00:11:29.898 "data_size": 65536
00:11:29.898 },
00:11:29.898 {
00:11:29.898 "name": null,
00:11:29.898 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:29.898 "is_configured": false,
00:11:29.898 "data_offset": 0,
00:11:29.898 "data_size": 65536
00:11:29.898 },
00:11:29.898 {
00:11:29.898 "name": "BaseBdev3",
00:11:29.898 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:29.898 "is_configured": true,
00:11:29.898 "data_offset": 0,
00:11:29.898 "data_size": 65536
00:11:29.898 },
00:11:29.898 {
00:11:29.898 "name": "BaseBdev4",
00:11:29.898 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:29.898 "is_configured": true,
00:11:29.898 "data_offset": 0,
00:11:29.898 "data_size": 65536
00:11:29.898 }
00:11:29.898 ]
00:11:29.898 }'
00:11:29.898 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.898 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.466 [2024-11-19 10:05:44.557903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.466 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.725 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.725 "name": "Existed_Raid",
00:11:30.725 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.725 "strip_size_kb": 64,
00:11:30.725 "state": "configuring",
00:11:30.725 "raid_level": "concat",
00:11:30.725 "superblock": false,
00:11:30.725 "num_base_bdevs": 4,
00:11:30.725 "num_base_bdevs_discovered": 2,
00:11:30.725 "num_base_bdevs_operational": 4,
00:11:30.725 "base_bdevs_list": [
00:11:30.725 {
00:11:30.725 "name": null,
00:11:30.725 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5",
00:11:30.725 "is_configured": false,
00:11:30.725 "data_offset": 0,
00:11:30.725 "data_size": 65536
00:11:30.725 },
00:11:30.725 {
00:11:30.725 "name": null,
00:11:30.725 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f",
00:11:30.725 "is_configured": false,
00:11:30.725 "data_offset": 0,
00:11:30.725 "data_size": 65536
00:11:30.725 },
00:11:30.725 {
00:11:30.725 "name": "BaseBdev3",
00:11:30.725 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826",
00:11:30.725 "is_configured": true,
00:11:30.725 "data_offset": 0,
00:11:30.725 "data_size": 65536
00:11:30.725 },
00:11:30.725 {
00:11:30.725 "name": "BaseBdev4",
00:11:30.725 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88",
00:11:30.725 "is_configured": true,
00:11:30.725 "data_offset": 0,
00:11:30.725 "data_size": 65536
00:11:30.725 }
00:11:30.725 ]
00:11:30.725 }'
00:11:30.725 10:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:30.725 10:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.984 [2024-11-19 10:05:45.161236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.984 10:05:45
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.984 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.243 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.243 "name": "Existed_Raid", 00:11:31.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.243 "strip_size_kb": 64, 00:11:31.243 "state": "configuring", 00:11:31.243 "raid_level": "concat", 00:11:31.243 "superblock": false, 00:11:31.243 "num_base_bdevs": 4, 00:11:31.243 "num_base_bdevs_discovered": 3, 00:11:31.243 "num_base_bdevs_operational": 4, 00:11:31.243 "base_bdevs_list": [ 00:11:31.243 { 00:11:31.243 "name": null, 00:11:31.243 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5", 00:11:31.243 "is_configured": false, 00:11:31.243 "data_offset": 0, 00:11:31.243 "data_size": 65536 00:11:31.243 }, 00:11:31.243 { 00:11:31.243 "name": "BaseBdev2", 00:11:31.243 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f", 00:11:31.243 "is_configured": true, 00:11:31.243 "data_offset": 0, 00:11:31.243 "data_size": 65536 00:11:31.243 }, 00:11:31.243 { 00:11:31.243 "name": "BaseBdev3", 00:11:31.243 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826", 00:11:31.243 "is_configured": true, 00:11:31.243 "data_offset": 0, 00:11:31.243 "data_size": 65536 00:11:31.243 }, 00:11:31.243 { 00:11:31.243 "name": "BaseBdev4", 00:11:31.243 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88", 00:11:31.243 "is_configured": true, 00:11:31.243 "data_offset": 0, 00:11:31.243 "data_size": 65536 00:11:31.243 } 00:11:31.243 ] 00:11:31.243 }' 00:11:31.243 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.243 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b2aa23e-4fc6-4943-80c7-f789dafb94d5 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.561 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.820 [2024-11-19 10:05:45.794499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.820 [2024-11-19 10:05:45.794899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.820 [2024-11-19 10:05:45.794925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:31.820 [2024-11-19 10:05:45.795303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:31.820 [2024-11-19 10:05:45.795508] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.820 [2024-11-19 10:05:45.795531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:31.820 [2024-11-19 10:05:45.795908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.820 NewBaseBdev 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.820 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.820 [ 00:11:31.820 { 
00:11:31.820 "name": "NewBaseBdev", 00:11:31.820 "aliases": [ 00:11:31.820 "3b2aa23e-4fc6-4943-80c7-f789dafb94d5" 00:11:31.820 ], 00:11:31.820 "product_name": "Malloc disk", 00:11:31.820 "block_size": 512, 00:11:31.820 "num_blocks": 65536, 00:11:31.820 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5", 00:11:31.820 "assigned_rate_limits": { 00:11:31.821 "rw_ios_per_sec": 0, 00:11:31.821 "rw_mbytes_per_sec": 0, 00:11:31.821 "r_mbytes_per_sec": 0, 00:11:31.821 "w_mbytes_per_sec": 0 00:11:31.821 }, 00:11:31.821 "claimed": true, 00:11:31.821 "claim_type": "exclusive_write", 00:11:31.821 "zoned": false, 00:11:31.821 "supported_io_types": { 00:11:31.821 "read": true, 00:11:31.821 "write": true, 00:11:31.821 "unmap": true, 00:11:31.821 "flush": true, 00:11:31.821 "reset": true, 00:11:31.821 "nvme_admin": false, 00:11:31.821 "nvme_io": false, 00:11:31.821 "nvme_io_md": false, 00:11:31.821 "write_zeroes": true, 00:11:31.821 "zcopy": true, 00:11:31.821 "get_zone_info": false, 00:11:31.821 "zone_management": false, 00:11:31.821 "zone_append": false, 00:11:31.821 "compare": false, 00:11:31.821 "compare_and_write": false, 00:11:31.821 "abort": true, 00:11:31.821 "seek_hole": false, 00:11:31.821 "seek_data": false, 00:11:31.821 "copy": true, 00:11:31.821 "nvme_iov_md": false 00:11:31.821 }, 00:11:31.821 "memory_domains": [ 00:11:31.821 { 00:11:31.821 "dma_device_id": "system", 00:11:31.821 "dma_device_type": 1 00:11:31.821 }, 00:11:31.821 { 00:11:31.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.821 "dma_device_type": 2 00:11:31.821 } 00:11:31.821 ], 00:11:31.821 "driver_specific": {} 00:11:31.821 } 00:11:31.821 ] 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:31.821 
10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.821 "name": "Existed_Raid", 00:11:31.821 "uuid": "d05adf04-94eb-4816-a102-13b6215ae2d5", 00:11:31.821 "strip_size_kb": 64, 00:11:31.821 "state": "online", 00:11:31.821 "raid_level": "concat", 00:11:31.821 "superblock": false, 00:11:31.821 "num_base_bdevs": 4, 00:11:31.821 "num_base_bdevs_discovered": 4, 00:11:31.821 
"num_base_bdevs_operational": 4, 00:11:31.821 "base_bdevs_list": [ 00:11:31.821 { 00:11:31.821 "name": "NewBaseBdev", 00:11:31.821 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5", 00:11:31.821 "is_configured": true, 00:11:31.821 "data_offset": 0, 00:11:31.821 "data_size": 65536 00:11:31.821 }, 00:11:31.821 { 00:11:31.821 "name": "BaseBdev2", 00:11:31.821 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f", 00:11:31.821 "is_configured": true, 00:11:31.821 "data_offset": 0, 00:11:31.821 "data_size": 65536 00:11:31.821 }, 00:11:31.821 { 00:11:31.821 "name": "BaseBdev3", 00:11:31.821 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826", 00:11:31.821 "is_configured": true, 00:11:31.821 "data_offset": 0, 00:11:31.821 "data_size": 65536 00:11:31.821 }, 00:11:31.821 { 00:11:31.821 "name": "BaseBdev4", 00:11:31.821 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88", 00:11:31.821 "is_configured": true, 00:11:31.821 "data_offset": 0, 00:11:31.821 "data_size": 65536 00:11:31.821 } 00:11:31.821 ] 00:11:31.821 }' 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.821 10:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.080 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.080 [2024-11-19 10:05:46.311228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.340 "name": "Existed_Raid", 00:11:32.340 "aliases": [ 00:11:32.340 "d05adf04-94eb-4816-a102-13b6215ae2d5" 00:11:32.340 ], 00:11:32.340 "product_name": "Raid Volume", 00:11:32.340 "block_size": 512, 00:11:32.340 "num_blocks": 262144, 00:11:32.340 "uuid": "d05adf04-94eb-4816-a102-13b6215ae2d5", 00:11:32.340 "assigned_rate_limits": { 00:11:32.340 "rw_ios_per_sec": 0, 00:11:32.340 "rw_mbytes_per_sec": 0, 00:11:32.340 "r_mbytes_per_sec": 0, 00:11:32.340 "w_mbytes_per_sec": 0 00:11:32.340 }, 00:11:32.340 "claimed": false, 00:11:32.340 "zoned": false, 00:11:32.340 "supported_io_types": { 00:11:32.340 "read": true, 00:11:32.340 "write": true, 00:11:32.340 "unmap": true, 00:11:32.340 "flush": true, 00:11:32.340 "reset": true, 00:11:32.340 "nvme_admin": false, 00:11:32.340 "nvme_io": false, 00:11:32.340 "nvme_io_md": false, 00:11:32.340 "write_zeroes": true, 00:11:32.340 "zcopy": false, 00:11:32.340 "get_zone_info": false, 00:11:32.340 "zone_management": false, 00:11:32.340 "zone_append": false, 00:11:32.340 "compare": false, 00:11:32.340 "compare_and_write": false, 00:11:32.340 "abort": false, 00:11:32.340 "seek_hole": false, 00:11:32.340 "seek_data": false, 00:11:32.340 "copy": false, 00:11:32.340 "nvme_iov_md": false 00:11:32.340 }, 00:11:32.340 "memory_domains": [ 00:11:32.340 { 00:11:32.340 "dma_device_id": "system", 
00:11:32.340 "dma_device_type": 1 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.340 "dma_device_type": 2 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "system", 00:11:32.340 "dma_device_type": 1 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.340 "dma_device_type": 2 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "system", 00:11:32.340 "dma_device_type": 1 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.340 "dma_device_type": 2 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "system", 00:11:32.340 "dma_device_type": 1 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.340 "dma_device_type": 2 00:11:32.340 } 00:11:32.340 ], 00:11:32.340 "driver_specific": { 00:11:32.340 "raid": { 00:11:32.340 "uuid": "d05adf04-94eb-4816-a102-13b6215ae2d5", 00:11:32.340 "strip_size_kb": 64, 00:11:32.340 "state": "online", 00:11:32.340 "raid_level": "concat", 00:11:32.340 "superblock": false, 00:11:32.340 "num_base_bdevs": 4, 00:11:32.340 "num_base_bdevs_discovered": 4, 00:11:32.340 "num_base_bdevs_operational": 4, 00:11:32.340 "base_bdevs_list": [ 00:11:32.340 { 00:11:32.340 "name": "NewBaseBdev", 00:11:32.340 "uuid": "3b2aa23e-4fc6-4943-80c7-f789dafb94d5", 00:11:32.340 "is_configured": true, 00:11:32.340 "data_offset": 0, 00:11:32.340 "data_size": 65536 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "name": "BaseBdev2", 00:11:32.340 "uuid": "01ef3561-8da2-4b00-b47b-b619f6940f0f", 00:11:32.340 "is_configured": true, 00:11:32.340 "data_offset": 0, 00:11:32.340 "data_size": 65536 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "name": "BaseBdev3", 00:11:32.340 "uuid": "c3cac15c-d472-4d91-a1c1-4a1c14b7f826", 00:11:32.340 "is_configured": true, 00:11:32.340 "data_offset": 0, 00:11:32.340 "data_size": 65536 00:11:32.340 }, 00:11:32.340 { 00:11:32.340 "name": "BaseBdev4", 
00:11:32.340 "uuid": "a375b46c-b91b-459c-b8e2-d13a87f39a88", 00:11:32.340 "is_configured": true, 00:11:32.340 "data_offset": 0, 00:11:32.340 "data_size": 65536 00:11:32.340 } 00:11:32.340 ] 00:11:32.340 } 00:11:32.340 } 00:11:32.340 }' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.340 BaseBdev2 00:11:32.340 BaseBdev3 00:11:32.340 BaseBdev4' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.340 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.599 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 [2024-11-19 10:05:46.698829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.600 [2024-11-19 10:05:46.698874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.600 [2024-11-19 10:05:46.698998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.600 [2024-11-19 10:05:46.699102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.600 [2024-11-19 10:05:46.699120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71275 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71275 
']' 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71275 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71275 00:11:32.600 killing process with pid 71275 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71275' 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71275 00:11:32.600 [2024-11-19 10:05:46.739975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.600 10:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71275 00:11:33.167 [2024-11-19 10:05:47.126626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.104 00:11:34.104 real 0m13.168s 00:11:34.104 user 0m21.596s 00:11:34.104 sys 0m1.893s 00:11:34.104 ************************************ 00:11:34.104 END TEST raid_state_function_test 00:11:34.104 ************************************ 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.104 10:05:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:34.104 
10:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.104 10:05:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.104 10:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.104 ************************************ 00:11:34.104 START TEST raid_state_function_test_sb 00:11:34.104 ************************************ 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71958 00:11:34.104 Process raid pid: 71958 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 71958' 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71958 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71958 ']' 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.104 10:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.363 [2024-11-19 10:05:48.417710] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:34.363 [2024-11-19 10:05:48.417980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.622 [2024-11-19 10:05:48.606752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.622 [2024-11-19 10:05:48.792951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.881 [2024-11-19 10:05:49.043819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.881 [2024-11-19 10:05:49.043906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.450 [2024-11-19 10:05:49.470622] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.450 [2024-11-19 10:05:49.470702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.450 [2024-11-19 10:05:49.470720] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.450 [2024-11-19 10:05:49.470737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.450 [2024-11-19 10:05:49.470747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:35.450 [2024-11-19 10:05:49.470761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.450 [2024-11-19 10:05:49.470771] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.450 [2024-11-19 10:05:49.470800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.450 
10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.450 "name": "Existed_Raid", 00:11:35.450 "uuid": "f13bdcd5-2a48-48ba-90c4-9d85b806c56e", 00:11:35.450 "strip_size_kb": 64, 00:11:35.450 "state": "configuring", 00:11:35.450 "raid_level": "concat", 00:11:35.450 "superblock": true, 00:11:35.450 "num_base_bdevs": 4, 00:11:35.450 "num_base_bdevs_discovered": 0, 00:11:35.450 "num_base_bdevs_operational": 4, 00:11:35.450 "base_bdevs_list": [ 00:11:35.450 { 00:11:35.450 "name": "BaseBdev1", 00:11:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.450 "is_configured": false, 00:11:35.450 "data_offset": 0, 00:11:35.450 "data_size": 0 00:11:35.450 }, 00:11:35.450 { 00:11:35.450 "name": "BaseBdev2", 00:11:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.450 "is_configured": false, 00:11:35.450 "data_offset": 0, 00:11:35.450 "data_size": 0 00:11:35.450 }, 00:11:35.450 { 00:11:35.450 "name": "BaseBdev3", 00:11:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.450 "is_configured": false, 00:11:35.450 "data_offset": 0, 00:11:35.450 "data_size": 0 00:11:35.450 }, 00:11:35.450 { 00:11:35.450 "name": "BaseBdev4", 00:11:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.450 "is_configured": false, 00:11:35.450 "data_offset": 0, 00:11:35.450 "data_size": 0 00:11:35.450 } 00:11:35.450 ] 00:11:35.450 }' 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.450 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.018 10:05:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.018 [2024-11-19 10:05:49.982654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.018 [2024-11-19 10:05:49.982711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.018 [2024-11-19 10:05:49.990642] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.018 [2024-11-19 10:05:49.990700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.018 [2024-11-19 10:05:49.990716] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.018 [2024-11-19 10:05:49.990732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.018 [2024-11-19 10:05:49.990742] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.018 [2024-11-19 10:05:49.990757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.018 [2024-11-19 10:05:49.990767] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:36.018 [2024-11-19 10:05:49.990794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.018 10:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.019 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.019 10:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 [2024-11-19 10:05:50.039510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.019 BaseBdev1 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 [ 00:11:36.019 { 00:11:36.019 "name": "BaseBdev1", 00:11:36.019 "aliases": [ 00:11:36.019 "3dc6c958-970b-4145-8955-2280c9a65ea9" 00:11:36.019 ], 00:11:36.019 "product_name": "Malloc disk", 00:11:36.019 "block_size": 512, 00:11:36.019 "num_blocks": 65536, 00:11:36.019 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:36.019 "assigned_rate_limits": { 00:11:36.019 "rw_ios_per_sec": 0, 00:11:36.019 "rw_mbytes_per_sec": 0, 00:11:36.019 "r_mbytes_per_sec": 0, 00:11:36.019 "w_mbytes_per_sec": 0 00:11:36.019 }, 00:11:36.019 "claimed": true, 00:11:36.019 "claim_type": "exclusive_write", 00:11:36.019 "zoned": false, 00:11:36.019 "supported_io_types": { 00:11:36.019 "read": true, 00:11:36.019 "write": true, 00:11:36.019 "unmap": true, 00:11:36.019 "flush": true, 00:11:36.019 "reset": true, 00:11:36.019 "nvme_admin": false, 00:11:36.019 "nvme_io": false, 00:11:36.019 "nvme_io_md": false, 00:11:36.019 "write_zeroes": true, 00:11:36.019 "zcopy": true, 00:11:36.019 "get_zone_info": false, 00:11:36.019 "zone_management": false, 00:11:36.019 "zone_append": false, 00:11:36.019 "compare": false, 00:11:36.019 "compare_and_write": false, 00:11:36.019 "abort": true, 00:11:36.019 "seek_hole": false, 00:11:36.019 "seek_data": false, 00:11:36.019 "copy": true, 00:11:36.019 "nvme_iov_md": false 00:11:36.019 }, 00:11:36.019 "memory_domains": [ 00:11:36.019 { 00:11:36.019 "dma_device_id": "system", 00:11:36.019 "dma_device_type": 1 00:11:36.019 }, 00:11:36.019 { 00:11:36.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.019 "dma_device_type": 2 00:11:36.019 } 
00:11:36.019 ], 00:11:36.019 "driver_specific": {} 00:11:36.019 } 00:11:36.019 ] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.019 10:05:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.019 "name": "Existed_Raid", 00:11:36.019 "uuid": "904c82e0-5562-4d0b-921a-933070e2aa4f", 00:11:36.019 "strip_size_kb": 64, 00:11:36.019 "state": "configuring", 00:11:36.019 "raid_level": "concat", 00:11:36.019 "superblock": true, 00:11:36.019 "num_base_bdevs": 4, 00:11:36.019 "num_base_bdevs_discovered": 1, 00:11:36.019 "num_base_bdevs_operational": 4, 00:11:36.019 "base_bdevs_list": [ 00:11:36.019 { 00:11:36.019 "name": "BaseBdev1", 00:11:36.019 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:36.019 "is_configured": true, 00:11:36.019 "data_offset": 2048, 00:11:36.019 "data_size": 63488 00:11:36.019 }, 00:11:36.019 { 00:11:36.019 "name": "BaseBdev2", 00:11:36.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.019 "is_configured": false, 00:11:36.019 "data_offset": 0, 00:11:36.019 "data_size": 0 00:11:36.019 }, 00:11:36.019 { 00:11:36.019 "name": "BaseBdev3", 00:11:36.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.019 "is_configured": false, 00:11:36.019 "data_offset": 0, 00:11:36.019 "data_size": 0 00:11:36.019 }, 00:11:36.019 { 00:11:36.019 "name": "BaseBdev4", 00:11:36.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.019 "is_configured": false, 00:11:36.019 "data_offset": 0, 00:11:36.019 "data_size": 0 00:11:36.019 } 00:11:36.019 ] 00:11:36.019 }' 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.019 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.592 10:05:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.592 [2024-11-19 10:05:50.607775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.592 [2024-11-19 10:05:50.607907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.592 [2024-11-19 10:05:50.615890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.592 [2024-11-19 10:05:50.618597] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.592 [2024-11-19 10:05:50.618667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.592 [2024-11-19 10:05:50.618683] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.592 [2024-11-19 10:05:50.618699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.592 [2024-11-19 10:05:50.618725] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.592 [2024-11-19 10:05:50.618738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:36.592 "name": "Existed_Raid", 00:11:36.592 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:36.592 "strip_size_kb": 64, 00:11:36.592 "state": "configuring", 00:11:36.592 "raid_level": "concat", 00:11:36.592 "superblock": true, 00:11:36.592 "num_base_bdevs": 4, 00:11:36.592 "num_base_bdevs_discovered": 1, 00:11:36.592 "num_base_bdevs_operational": 4, 00:11:36.592 "base_bdevs_list": [ 00:11:36.592 { 00:11:36.592 "name": "BaseBdev1", 00:11:36.592 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:36.592 "is_configured": true, 00:11:36.592 "data_offset": 2048, 00:11:36.592 "data_size": 63488 00:11:36.592 }, 00:11:36.592 { 00:11:36.592 "name": "BaseBdev2", 00:11:36.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.592 "is_configured": false, 00:11:36.592 "data_offset": 0, 00:11:36.592 "data_size": 0 00:11:36.592 }, 00:11:36.592 { 00:11:36.592 "name": "BaseBdev3", 00:11:36.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.592 "is_configured": false, 00:11:36.592 "data_offset": 0, 00:11:36.592 "data_size": 0 00:11:36.592 }, 00:11:36.592 { 00:11:36.592 "name": "BaseBdev4", 00:11:36.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.592 "is_configured": false, 00:11:36.592 "data_offset": 0, 00:11:36.592 "data_size": 0 00:11:36.592 } 00:11:36.592 ] 00:11:36.592 }' 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.592 10:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 [2024-11-19 10:05:51.136349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:37.160 BaseBdev2 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.160 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 [ 00:11:37.160 { 00:11:37.160 "name": "BaseBdev2", 00:11:37.160 "aliases": [ 00:11:37.160 "92bfbfa5-5330-4948-9b17-882071f5bfa6" 00:11:37.160 ], 00:11:37.160 "product_name": "Malloc disk", 00:11:37.160 "block_size": 512, 00:11:37.160 "num_blocks": 65536, 00:11:37.160 "uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 
00:11:37.160 "assigned_rate_limits": { 00:11:37.160 "rw_ios_per_sec": 0, 00:11:37.160 "rw_mbytes_per_sec": 0, 00:11:37.161 "r_mbytes_per_sec": 0, 00:11:37.161 "w_mbytes_per_sec": 0 00:11:37.161 }, 00:11:37.161 "claimed": true, 00:11:37.161 "claim_type": "exclusive_write", 00:11:37.161 "zoned": false, 00:11:37.161 "supported_io_types": { 00:11:37.161 "read": true, 00:11:37.161 "write": true, 00:11:37.161 "unmap": true, 00:11:37.161 "flush": true, 00:11:37.161 "reset": true, 00:11:37.161 "nvme_admin": false, 00:11:37.161 "nvme_io": false, 00:11:37.161 "nvme_io_md": false, 00:11:37.161 "write_zeroes": true, 00:11:37.161 "zcopy": true, 00:11:37.161 "get_zone_info": false, 00:11:37.161 "zone_management": false, 00:11:37.161 "zone_append": false, 00:11:37.161 "compare": false, 00:11:37.161 "compare_and_write": false, 00:11:37.161 "abort": true, 00:11:37.161 "seek_hole": false, 00:11:37.161 "seek_data": false, 00:11:37.161 "copy": true, 00:11:37.161 "nvme_iov_md": false 00:11:37.161 }, 00:11:37.161 "memory_domains": [ 00:11:37.161 { 00:11:37.161 "dma_device_id": "system", 00:11:37.161 "dma_device_type": 1 00:11:37.161 }, 00:11:37.161 { 00:11:37.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.161 "dma_device_type": 2 00:11:37.161 } 00:11:37.161 ], 00:11:37.161 "driver_specific": {} 00:11:37.161 } 00:11:37.161 ] 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.161 "name": "Existed_Raid", 00:11:37.161 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:37.161 "strip_size_kb": 64, 00:11:37.161 "state": "configuring", 00:11:37.161 "raid_level": "concat", 00:11:37.161 "superblock": true, 00:11:37.161 "num_base_bdevs": 4, 00:11:37.161 "num_base_bdevs_discovered": 2, 00:11:37.161 
"num_base_bdevs_operational": 4, 00:11:37.161 "base_bdevs_list": [ 00:11:37.161 { 00:11:37.161 "name": "BaseBdev1", 00:11:37.161 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:37.161 "is_configured": true, 00:11:37.161 "data_offset": 2048, 00:11:37.161 "data_size": 63488 00:11:37.161 }, 00:11:37.161 { 00:11:37.161 "name": "BaseBdev2", 00:11:37.161 "uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 00:11:37.161 "is_configured": true, 00:11:37.161 "data_offset": 2048, 00:11:37.161 "data_size": 63488 00:11:37.161 }, 00:11:37.161 { 00:11:37.161 "name": "BaseBdev3", 00:11:37.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.161 "is_configured": false, 00:11:37.161 "data_offset": 0, 00:11:37.161 "data_size": 0 00:11:37.161 }, 00:11:37.161 { 00:11:37.161 "name": "BaseBdev4", 00:11:37.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.161 "is_configured": false, 00:11:37.161 "data_offset": 0, 00:11:37.161 "data_size": 0 00:11:37.161 } 00:11:37.161 ] 00:11:37.161 }' 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.161 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.729 [2024-11-19 10:05:51.748612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.729 BaseBdev3 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.729 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.730 [ 00:11:37.730 { 00:11:37.730 "name": "BaseBdev3", 00:11:37.730 "aliases": [ 00:11:37.730 "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc" 00:11:37.730 ], 00:11:37.730 "product_name": "Malloc disk", 00:11:37.730 "block_size": 512, 00:11:37.730 "num_blocks": 65536, 00:11:37.730 "uuid": "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc", 00:11:37.730 "assigned_rate_limits": { 00:11:37.730 "rw_ios_per_sec": 0, 00:11:37.730 "rw_mbytes_per_sec": 0, 00:11:37.730 "r_mbytes_per_sec": 0, 00:11:37.730 "w_mbytes_per_sec": 0 00:11:37.730 }, 00:11:37.730 "claimed": true, 00:11:37.730 "claim_type": "exclusive_write", 00:11:37.730 "zoned": false, 00:11:37.730 "supported_io_types": { 
00:11:37.730 "read": true, 00:11:37.730 "write": true, 00:11:37.730 "unmap": true, 00:11:37.730 "flush": true, 00:11:37.730 "reset": true, 00:11:37.730 "nvme_admin": false, 00:11:37.730 "nvme_io": false, 00:11:37.730 "nvme_io_md": false, 00:11:37.730 "write_zeroes": true, 00:11:37.730 "zcopy": true, 00:11:37.730 "get_zone_info": false, 00:11:37.730 "zone_management": false, 00:11:37.730 "zone_append": false, 00:11:37.730 "compare": false, 00:11:37.730 "compare_and_write": false, 00:11:37.730 "abort": true, 00:11:37.730 "seek_hole": false, 00:11:37.730 "seek_data": false, 00:11:37.730 "copy": true, 00:11:37.730 "nvme_iov_md": false 00:11:37.730 }, 00:11:37.730 "memory_domains": [ 00:11:37.730 { 00:11:37.730 "dma_device_id": "system", 00:11:37.730 "dma_device_type": 1 00:11:37.730 }, 00:11:37.730 { 00:11:37.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.730 "dma_device_type": 2 00:11:37.730 } 00:11:37.730 ], 00:11:37.730 "driver_specific": {} 00:11:37.730 } 00:11:37.730 ] 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.730 "name": "Existed_Raid", 00:11:37.730 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:37.730 "strip_size_kb": 64, 00:11:37.730 "state": "configuring", 00:11:37.730 "raid_level": "concat", 00:11:37.730 "superblock": true, 00:11:37.730 "num_base_bdevs": 4, 00:11:37.730 "num_base_bdevs_discovered": 3, 00:11:37.730 "num_base_bdevs_operational": 4, 00:11:37.730 "base_bdevs_list": [ 00:11:37.730 { 00:11:37.730 "name": "BaseBdev1", 00:11:37.730 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:37.730 "is_configured": true, 00:11:37.730 "data_offset": 2048, 00:11:37.730 "data_size": 63488 00:11:37.730 }, 00:11:37.730 { 00:11:37.730 "name": "BaseBdev2", 00:11:37.730 
"uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 00:11:37.730 "is_configured": true, 00:11:37.730 "data_offset": 2048, 00:11:37.730 "data_size": 63488 00:11:37.730 }, 00:11:37.730 { 00:11:37.730 "name": "BaseBdev3", 00:11:37.730 "uuid": "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc", 00:11:37.730 "is_configured": true, 00:11:37.730 "data_offset": 2048, 00:11:37.730 "data_size": 63488 00:11:37.730 }, 00:11:37.730 { 00:11:37.730 "name": "BaseBdev4", 00:11:37.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.730 "is_configured": false, 00:11:37.730 "data_offset": 0, 00:11:37.730 "data_size": 0 00:11:37.730 } 00:11:37.730 ] 00:11:37.730 }' 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.730 10:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.300 [2024-11-19 10:05:52.353011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.300 [2024-11-19 10:05:52.353430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.300 [2024-11-19 10:05:52.353450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.300 BaseBdev4 00:11:38.300 [2024-11-19 10:05:52.353834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.300 [2024-11-19 10:05:52.354052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.300 [2024-11-19 10:05:52.354081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:38.300 [2024-11-19 10:05:52.354274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.300 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.301 [ 00:11:38.301 { 00:11:38.301 "name": "BaseBdev4", 00:11:38.301 "aliases": [ 00:11:38.301 "3fdde382-58c5-48a1-bdcf-ead9792d8eb9" 00:11:38.301 ], 00:11:38.301 "product_name": "Malloc disk", 00:11:38.301 "block_size": 512, 00:11:38.301 
"num_blocks": 65536, 00:11:38.301 "uuid": "3fdde382-58c5-48a1-bdcf-ead9792d8eb9", 00:11:38.301 "assigned_rate_limits": { 00:11:38.301 "rw_ios_per_sec": 0, 00:11:38.301 "rw_mbytes_per_sec": 0, 00:11:38.301 "r_mbytes_per_sec": 0, 00:11:38.301 "w_mbytes_per_sec": 0 00:11:38.301 }, 00:11:38.301 "claimed": true, 00:11:38.301 "claim_type": "exclusive_write", 00:11:38.301 "zoned": false, 00:11:38.301 "supported_io_types": { 00:11:38.301 "read": true, 00:11:38.301 "write": true, 00:11:38.301 "unmap": true, 00:11:38.301 "flush": true, 00:11:38.301 "reset": true, 00:11:38.301 "nvme_admin": false, 00:11:38.301 "nvme_io": false, 00:11:38.301 "nvme_io_md": false, 00:11:38.301 "write_zeroes": true, 00:11:38.301 "zcopy": true, 00:11:38.301 "get_zone_info": false, 00:11:38.301 "zone_management": false, 00:11:38.301 "zone_append": false, 00:11:38.301 "compare": false, 00:11:38.301 "compare_and_write": false, 00:11:38.301 "abort": true, 00:11:38.301 "seek_hole": false, 00:11:38.301 "seek_data": false, 00:11:38.301 "copy": true, 00:11:38.301 "nvme_iov_md": false 00:11:38.301 }, 00:11:38.301 "memory_domains": [ 00:11:38.301 { 00:11:38.301 "dma_device_id": "system", 00:11:38.301 "dma_device_type": 1 00:11:38.301 }, 00:11:38.301 { 00:11:38.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.301 "dma_device_type": 2 00:11:38.301 } 00:11:38.301 ], 00:11:38.301 "driver_specific": {} 00:11:38.301 } 00:11:38.301 ] 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.301 "name": "Existed_Raid", 00:11:38.301 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:38.301 "strip_size_kb": 64, 00:11:38.301 "state": "online", 00:11:38.301 "raid_level": "concat", 00:11:38.301 "superblock": true, 00:11:38.301 "num_base_bdevs": 4, 
00:11:38.301 "num_base_bdevs_discovered": 4, 00:11:38.301 "num_base_bdevs_operational": 4, 00:11:38.301 "base_bdevs_list": [ 00:11:38.301 { 00:11:38.301 "name": "BaseBdev1", 00:11:38.301 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:38.301 "is_configured": true, 00:11:38.301 "data_offset": 2048, 00:11:38.301 "data_size": 63488 00:11:38.301 }, 00:11:38.301 { 00:11:38.301 "name": "BaseBdev2", 00:11:38.301 "uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 00:11:38.301 "is_configured": true, 00:11:38.301 "data_offset": 2048, 00:11:38.301 "data_size": 63488 00:11:38.301 }, 00:11:38.301 { 00:11:38.301 "name": "BaseBdev3", 00:11:38.301 "uuid": "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc", 00:11:38.301 "is_configured": true, 00:11:38.301 "data_offset": 2048, 00:11:38.301 "data_size": 63488 00:11:38.301 }, 00:11:38.301 { 00:11:38.301 "name": "BaseBdev4", 00:11:38.301 "uuid": "3fdde382-58c5-48a1-bdcf-ead9792d8eb9", 00:11:38.301 "is_configured": true, 00:11:38.301 "data_offset": 2048, 00:11:38.301 "data_size": 63488 00:11:38.301 } 00:11:38.301 ] 00:11:38.301 }' 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.301 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.869 
10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.869 [2024-11-19 10:05:52.929729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.869 "name": "Existed_Raid", 00:11:38.869 "aliases": [ 00:11:38.869 "e0a8c391-1dcb-465a-aed1-c9b14f4d8186" 00:11:38.869 ], 00:11:38.869 "product_name": "Raid Volume", 00:11:38.869 "block_size": 512, 00:11:38.869 "num_blocks": 253952, 00:11:38.869 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:38.869 "assigned_rate_limits": { 00:11:38.869 "rw_ios_per_sec": 0, 00:11:38.869 "rw_mbytes_per_sec": 0, 00:11:38.869 "r_mbytes_per_sec": 0, 00:11:38.869 "w_mbytes_per_sec": 0 00:11:38.869 }, 00:11:38.869 "claimed": false, 00:11:38.869 "zoned": false, 00:11:38.869 "supported_io_types": { 00:11:38.869 "read": true, 00:11:38.869 "write": true, 00:11:38.869 "unmap": true, 00:11:38.869 "flush": true, 00:11:38.869 "reset": true, 00:11:38.869 "nvme_admin": false, 00:11:38.869 "nvme_io": false, 00:11:38.869 "nvme_io_md": false, 00:11:38.869 "write_zeroes": true, 00:11:38.869 "zcopy": false, 00:11:38.869 "get_zone_info": false, 00:11:38.869 "zone_management": false, 00:11:38.869 "zone_append": false, 00:11:38.869 "compare": false, 00:11:38.869 "compare_and_write": false, 00:11:38.869 "abort": false, 00:11:38.869 "seek_hole": false, 00:11:38.869 "seek_data": false, 00:11:38.869 "copy": false, 00:11:38.869 
"nvme_iov_md": false 00:11:38.869 }, 00:11:38.869 "memory_domains": [ 00:11:38.869 { 00:11:38.869 "dma_device_id": "system", 00:11:38.869 "dma_device_type": 1 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.869 "dma_device_type": 2 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "system", 00:11:38.869 "dma_device_type": 1 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.869 "dma_device_type": 2 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "system", 00:11:38.869 "dma_device_type": 1 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.869 "dma_device_type": 2 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "system", 00:11:38.869 "dma_device_type": 1 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.869 "dma_device_type": 2 00:11:38.869 } 00:11:38.869 ], 00:11:38.869 "driver_specific": { 00:11:38.869 "raid": { 00:11:38.869 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:38.869 "strip_size_kb": 64, 00:11:38.869 "state": "online", 00:11:38.869 "raid_level": "concat", 00:11:38.869 "superblock": true, 00:11:38.869 "num_base_bdevs": 4, 00:11:38.869 "num_base_bdevs_discovered": 4, 00:11:38.869 "num_base_bdevs_operational": 4, 00:11:38.869 "base_bdevs_list": [ 00:11:38.869 { 00:11:38.869 "name": "BaseBdev1", 00:11:38.869 "uuid": "3dc6c958-970b-4145-8955-2280c9a65ea9", 00:11:38.869 "is_configured": true, 00:11:38.869 "data_offset": 2048, 00:11:38.869 "data_size": 63488 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "name": "BaseBdev2", 00:11:38.869 "uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 00:11:38.869 "is_configured": true, 00:11:38.869 "data_offset": 2048, 00:11:38.869 "data_size": 63488 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "name": "BaseBdev3", 00:11:38.869 "uuid": "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc", 00:11:38.869 "is_configured": true, 
00:11:38.869 "data_offset": 2048, 00:11:38.869 "data_size": 63488 00:11:38.869 }, 00:11:38.869 { 00:11:38.869 "name": "BaseBdev4", 00:11:38.869 "uuid": "3fdde382-58c5-48a1-bdcf-ead9792d8eb9", 00:11:38.869 "is_configured": true, 00:11:38.869 "data_offset": 2048, 00:11:38.869 "data_size": 63488 00:11:38.869 } 00:11:38.869 ] 00:11:38.869 } 00:11:38.869 } 00:11:38.869 }' 00:11:38.869 10:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.869 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.869 BaseBdev2 00:11:38.869 BaseBdev3 00:11:38.869 BaseBdev4' 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.870 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.131 10:05:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.131 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 [2024-11-19 10:05:53.301527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.131 [2024-11-19 10:05:53.301571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.131 [2024-11-19 10:05:53.301646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.391 "name": "Existed_Raid", 00:11:39.391 "uuid": "e0a8c391-1dcb-465a-aed1-c9b14f4d8186", 00:11:39.391 "strip_size_kb": 64, 00:11:39.391 "state": "offline", 00:11:39.391 "raid_level": "concat", 00:11:39.391 "superblock": true, 00:11:39.391 "num_base_bdevs": 4, 00:11:39.391 "num_base_bdevs_discovered": 3, 00:11:39.391 "num_base_bdevs_operational": 3, 00:11:39.391 "base_bdevs_list": [ 00:11:39.391 { 00:11:39.391 "name": null, 00:11:39.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.391 "is_configured": false, 00:11:39.391 "data_offset": 0, 00:11:39.391 "data_size": 63488 00:11:39.391 }, 00:11:39.391 { 00:11:39.391 "name": "BaseBdev2", 00:11:39.391 "uuid": "92bfbfa5-5330-4948-9b17-882071f5bfa6", 00:11:39.391 "is_configured": true, 00:11:39.391 "data_offset": 2048, 00:11:39.391 "data_size": 63488 00:11:39.391 }, 00:11:39.391 { 00:11:39.391 "name": "BaseBdev3", 00:11:39.391 "uuid": "81eb561d-f6e4-4203-8aab-eb2fbd9bcacc", 00:11:39.391 "is_configured": true, 00:11:39.391 "data_offset": 2048, 00:11:39.391 "data_size": 63488 00:11:39.391 }, 00:11:39.391 { 00:11:39.391 "name": "BaseBdev4", 00:11:39.391 "uuid": "3fdde382-58c5-48a1-bdcf-ead9792d8eb9", 00:11:39.391 "is_configured": true, 00:11:39.391 "data_offset": 2048, 00:11:39.391 "data_size": 63488 00:11:39.391 } 00:11:39.391 ] 00:11:39.391 }' 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.391 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.959 
10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.959 10:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.959 [2024-11-19 10:05:54.006119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.959 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.959 [2024-11-19 10:05:54.167303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:40.219 10:05:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 [2024-11-19 10:05:54.325598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:40.219 [2024-11-19 10:05:54.325665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 BaseBdev2 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 [ 00:11:40.479 { 00:11:40.479 "name": "BaseBdev2", 00:11:40.479 "aliases": [ 00:11:40.479 
"4033b0d4-64ed-488d-88e6-1d7bc5efa95e" 00:11:40.479 ], 00:11:40.479 "product_name": "Malloc disk", 00:11:40.479 "block_size": 512, 00:11:40.479 "num_blocks": 65536, 00:11:40.479 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:40.479 "assigned_rate_limits": { 00:11:40.479 "rw_ios_per_sec": 0, 00:11:40.479 "rw_mbytes_per_sec": 0, 00:11:40.479 "r_mbytes_per_sec": 0, 00:11:40.479 "w_mbytes_per_sec": 0 00:11:40.479 }, 00:11:40.479 "claimed": false, 00:11:40.479 "zoned": false, 00:11:40.479 "supported_io_types": { 00:11:40.479 "read": true, 00:11:40.479 "write": true, 00:11:40.479 "unmap": true, 00:11:40.479 "flush": true, 00:11:40.479 "reset": true, 00:11:40.479 "nvme_admin": false, 00:11:40.479 "nvme_io": false, 00:11:40.479 "nvme_io_md": false, 00:11:40.479 "write_zeroes": true, 00:11:40.479 "zcopy": true, 00:11:40.479 "get_zone_info": false, 00:11:40.479 "zone_management": false, 00:11:40.479 "zone_append": false, 00:11:40.479 "compare": false, 00:11:40.479 "compare_and_write": false, 00:11:40.479 "abort": true, 00:11:40.479 "seek_hole": false, 00:11:40.479 "seek_data": false, 00:11:40.479 "copy": true, 00:11:40.479 "nvme_iov_md": false 00:11:40.479 }, 00:11:40.479 "memory_domains": [ 00:11:40.479 { 00:11:40.479 "dma_device_id": "system", 00:11:40.479 "dma_device_type": 1 00:11:40.479 }, 00:11:40.479 { 00:11:40.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.479 "dma_device_type": 2 00:11:40.479 } 00:11:40.479 ], 00:11:40.479 "driver_specific": {} 00:11:40.479 } 00:11:40.479 ] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.479 10:05:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 BaseBdev3 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 [ 00:11:40.479 { 
00:11:40.479 "name": "BaseBdev3", 00:11:40.479 "aliases": [ 00:11:40.479 "f47177ec-748c-4309-9d33-3ea9dbf504b5" 00:11:40.479 ], 00:11:40.479 "product_name": "Malloc disk", 00:11:40.479 "block_size": 512, 00:11:40.479 "num_blocks": 65536, 00:11:40.479 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:40.479 "assigned_rate_limits": { 00:11:40.479 "rw_ios_per_sec": 0, 00:11:40.479 "rw_mbytes_per_sec": 0, 00:11:40.479 "r_mbytes_per_sec": 0, 00:11:40.479 "w_mbytes_per_sec": 0 00:11:40.479 }, 00:11:40.479 "claimed": false, 00:11:40.479 "zoned": false, 00:11:40.479 "supported_io_types": { 00:11:40.479 "read": true, 00:11:40.479 "write": true, 00:11:40.479 "unmap": true, 00:11:40.479 "flush": true, 00:11:40.479 "reset": true, 00:11:40.479 "nvme_admin": false, 00:11:40.479 "nvme_io": false, 00:11:40.479 "nvme_io_md": false, 00:11:40.479 "write_zeroes": true, 00:11:40.479 "zcopy": true, 00:11:40.480 "get_zone_info": false, 00:11:40.480 "zone_management": false, 00:11:40.480 "zone_append": false, 00:11:40.480 "compare": false, 00:11:40.480 "compare_and_write": false, 00:11:40.480 "abort": true, 00:11:40.480 "seek_hole": false, 00:11:40.480 "seek_data": false, 00:11:40.480 "copy": true, 00:11:40.480 "nvme_iov_md": false 00:11:40.480 }, 00:11:40.480 "memory_domains": [ 00:11:40.480 { 00:11:40.480 "dma_device_id": "system", 00:11:40.480 "dma_device_type": 1 00:11:40.480 }, 00:11:40.480 { 00:11:40.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.480 "dma_device_type": 2 00:11:40.480 } 00:11:40.480 ], 00:11:40.480 "driver_specific": {} 00:11:40.480 } 00:11:40.480 ] 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.480 BaseBdev4 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.480 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:40.480 [ 00:11:40.480 { 00:11:40.480 "name": "BaseBdev4", 00:11:40.480 "aliases": [ 00:11:40.480 "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d" 00:11:40.480 ], 00:11:40.480 "product_name": "Malloc disk", 00:11:40.480 "block_size": 512, 00:11:40.480 "num_blocks": 65536, 00:11:40.480 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:40.480 "assigned_rate_limits": { 00:11:40.480 "rw_ios_per_sec": 0, 00:11:40.480 "rw_mbytes_per_sec": 0, 00:11:40.480 "r_mbytes_per_sec": 0, 00:11:40.480 "w_mbytes_per_sec": 0 00:11:40.480 }, 00:11:40.480 "claimed": false, 00:11:40.480 "zoned": false, 00:11:40.480 "supported_io_types": { 00:11:40.480 "read": true, 00:11:40.480 "write": true, 00:11:40.480 "unmap": true, 00:11:40.480 "flush": true, 00:11:40.480 "reset": true, 00:11:40.790 "nvme_admin": false, 00:11:40.790 "nvme_io": false, 00:11:40.790 "nvme_io_md": false, 00:11:40.790 "write_zeroes": true, 00:11:40.790 "zcopy": true, 00:11:40.790 "get_zone_info": false, 00:11:40.790 "zone_management": false, 00:11:40.790 "zone_append": false, 00:11:40.790 "compare": false, 00:11:40.790 "compare_and_write": false, 00:11:40.790 "abort": true, 00:11:40.790 "seek_hole": false, 00:11:40.790 "seek_data": false, 00:11:40.790 "copy": true, 00:11:40.790 "nvme_iov_md": false 00:11:40.790 }, 00:11:40.790 "memory_domains": [ 00:11:40.790 { 00:11:40.790 "dma_device_id": "system", 00:11:40.790 "dma_device_type": 1 00:11:40.790 }, 00:11:40.790 { 00:11:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.790 "dma_device_type": 2 00:11:40.790 } 00:11:40.790 ], 00:11:40.790 "driver_specific": {} 00:11:40.790 } 00:11:40.790 ] 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.790 10:05:54 
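The loop above repeatedly issues `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN` and then polls with `bdev_get_bdevs -b BaseBdevN -t 2000` until the bdev appears. A minimal sketch of the JSON-RPC payloads behind those calls, as an aid to reading the dumps — method names and numbers are taken from the log itself, while the request framing (how `rpc.py` wraps them) is an assumption, so this only builds the payloads without sending them:

```python
import json

def make_request(req_id, method, params):
    # Build a JSON-RPC 2.0 request in the shape an SPDK rpc client sends.
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# bdev_malloc_create 32 512 -b BaseBdev2  ->  a 32 MiB malloc bdev with
# 512-byte blocks, i.e. the 65536 num_blocks seen in the dumps above.
create = make_request(1, "bdev_malloc_create",
                      {"num_blocks": 32 * 1024 * 1024 // 512,
                       "block_size": 512,
                       "name": "BaseBdev2"})

# bdev_get_bdevs -b BaseBdev2 -t 2000  ->  wait up to 2000 ms for the bdev.
query = make_request(2, "bdev_get_bdevs",
                     {"name": "BaseBdev2", "timeout": 2000})

print(json.dumps(create))
```

Note that 32 MiB / 512 B works out to exactly the `"num_blocks": 65536` reported for each BaseBdev in the `bdev_get_bdevs` output above.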
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.790 [2024-11-19 10:05:54.723970] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.790 [2024-11-19 10:05:54.724055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.790 [2024-11-19 10:05:54.724110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.790 [2024-11-19 10:05:54.727435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.790 [2024-11-19 10:05:54.727671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.790 "name": "Existed_Raid", 00:11:40.790 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:40.790 "strip_size_kb": 64, 00:11:40.790 "state": "configuring", 00:11:40.790 "raid_level": "concat", 00:11:40.790 "superblock": true, 00:11:40.790 "num_base_bdevs": 4, 00:11:40.790 "num_base_bdevs_discovered": 3, 00:11:40.790 "num_base_bdevs_operational": 4, 00:11:40.790 "base_bdevs_list": [ 00:11:40.790 { 00:11:40.790 "name": "BaseBdev1", 00:11:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.790 "is_configured": false, 00:11:40.790 "data_offset": 0, 00:11:40.790 "data_size": 0 00:11:40.790 }, 00:11:40.790 { 00:11:40.790 "name": "BaseBdev2", 00:11:40.790 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:40.790 "is_configured": true, 00:11:40.790 "data_offset": 2048, 00:11:40.790 "data_size": 63488 
00:11:40.790 }, 00:11:40.790 { 00:11:40.790 "name": "BaseBdev3", 00:11:40.790 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:40.790 "is_configured": true, 00:11:40.790 "data_offset": 2048, 00:11:40.790 "data_size": 63488 00:11:40.790 }, 00:11:40.790 { 00:11:40.790 "name": "BaseBdev4", 00:11:40.790 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:40.790 "is_configured": true, 00:11:40.790 "data_offset": 2048, 00:11:40.790 "data_size": 63488 00:11:40.790 } 00:11:40.790 ] 00:11:40.790 }' 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.790 10:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.050 [2024-11-19 10:05:55.264192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.050 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.309 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.309 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.309 "name": "Existed_Raid", 00:11:41.309 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:41.309 "strip_size_kb": 64, 00:11:41.309 "state": "configuring", 00:11:41.309 "raid_level": "concat", 00:11:41.309 "superblock": true, 00:11:41.309 "num_base_bdevs": 4, 00:11:41.309 "num_base_bdevs_discovered": 2, 00:11:41.309 "num_base_bdevs_operational": 4, 00:11:41.309 "base_bdevs_list": [ 00:11:41.309 { 00:11:41.309 "name": "BaseBdev1", 00:11:41.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.309 "is_configured": false, 00:11:41.309 "data_offset": 0, 00:11:41.309 "data_size": 0 00:11:41.309 }, 00:11:41.309 { 00:11:41.309 "name": null, 00:11:41.309 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:41.309 "is_configured": false, 00:11:41.309 "data_offset": 0, 00:11:41.309 "data_size": 63488 
00:11:41.309 }, 00:11:41.309 { 00:11:41.309 "name": "BaseBdev3", 00:11:41.309 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:41.309 "is_configured": true, 00:11:41.309 "data_offset": 2048, 00:11:41.309 "data_size": 63488 00:11:41.309 }, 00:11:41.309 { 00:11:41.309 "name": "BaseBdev4", 00:11:41.309 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:41.309 "is_configured": true, 00:11:41.309 "data_offset": 2048, 00:11:41.309 "data_size": 63488 00:11:41.309 } 00:11:41.309 ] 00:11:41.309 }' 00:11:41.309 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.309 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 [2024-11-19 10:05:55.900475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.877 BaseBdev1 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- 
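After `bdev_raid_remove_base_bdev BaseBdev2`, the test reads the slot back with `jq '.[0].base_bdevs_list[1].is_configured'` and expects `false`: the slot keeps its uuid and `data_size` 63488 but loses its name and configured flag. A small sketch replicating that jq check in Python on a trimmed copy of the state dumped above (only the fields the filter touches are kept):

```python
import json

# Trimmed copy of the Existed_Raid state logged after the remove:
# slot 1 is the removed BaseBdev2 -- name null, is_configured false.
raid_bdevs = json.loads("""
[{"name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false, "data_size": 0},
    {"name": null,        "is_configured": false, "data_size": 63488},
    {"name": "BaseBdev3", "is_configured": true,  "data_size": 63488},
    {"name": "BaseBdev4", "is_configured": true,  "data_size": 63488}
  ]}]
""")

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'
slot = raid_bdevs[0]["base_bdevs_list"][1]
print(slot["is_configured"])
```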
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 [ 00:11:41.877 { 00:11:41.877 "name": "BaseBdev1", 00:11:41.877 "aliases": [ 00:11:41.877 "f519486d-fe9e-4958-9b1e-d1e509977b40" 00:11:41.877 ], 00:11:41.877 "product_name": "Malloc disk", 00:11:41.877 "block_size": 512, 00:11:41.877 "num_blocks": 65536, 00:11:41.877 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:41.877 "assigned_rate_limits": { 00:11:41.877 "rw_ios_per_sec": 0, 00:11:41.877 "rw_mbytes_per_sec": 0, 
00:11:41.877 "r_mbytes_per_sec": 0, 00:11:41.877 "w_mbytes_per_sec": 0 00:11:41.877 }, 00:11:41.877 "claimed": true, 00:11:41.877 "claim_type": "exclusive_write", 00:11:41.877 "zoned": false, 00:11:41.877 "supported_io_types": { 00:11:41.877 "read": true, 00:11:41.877 "write": true, 00:11:41.877 "unmap": true, 00:11:41.877 "flush": true, 00:11:41.877 "reset": true, 00:11:41.877 "nvme_admin": false, 00:11:41.877 "nvme_io": false, 00:11:41.877 "nvme_io_md": false, 00:11:41.877 "write_zeroes": true, 00:11:41.877 "zcopy": true, 00:11:41.877 "get_zone_info": false, 00:11:41.877 "zone_management": false, 00:11:41.877 "zone_append": false, 00:11:41.877 "compare": false, 00:11:41.877 "compare_and_write": false, 00:11:41.877 "abort": true, 00:11:41.877 "seek_hole": false, 00:11:41.877 "seek_data": false, 00:11:41.877 "copy": true, 00:11:41.877 "nvme_iov_md": false 00:11:41.877 }, 00:11:41.877 "memory_domains": [ 00:11:41.877 { 00:11:41.877 "dma_device_id": "system", 00:11:41.877 "dma_device_type": 1 00:11:41.877 }, 00:11:41.877 { 00:11:41.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.877 "dma_device_type": 2 00:11:41.877 } 00:11:41.877 ], 00:11:41.877 "driver_specific": {} 00:11:41.877 } 00:11:41.877 ] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.877 10:05:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.877 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.877 "name": "Existed_Raid", 00:11:41.877 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:41.877 "strip_size_kb": 64, 00:11:41.877 "state": "configuring", 00:11:41.877 "raid_level": "concat", 00:11:41.877 "superblock": true, 00:11:41.877 "num_base_bdevs": 4, 00:11:41.877 "num_base_bdevs_discovered": 3, 00:11:41.877 "num_base_bdevs_operational": 4, 00:11:41.877 "base_bdevs_list": [ 00:11:41.877 { 00:11:41.877 "name": "BaseBdev1", 00:11:41.877 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:41.877 "is_configured": true, 00:11:41.877 "data_offset": 2048, 00:11:41.877 "data_size": 63488 00:11:41.877 }, 00:11:41.877 { 
00:11:41.877 "name": null, 00:11:41.877 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:41.877 "is_configured": false, 00:11:41.877 "data_offset": 0, 00:11:41.877 "data_size": 63488 00:11:41.877 }, 00:11:41.877 { 00:11:41.877 "name": "BaseBdev3", 00:11:41.877 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:41.877 "is_configured": true, 00:11:41.877 "data_offset": 2048, 00:11:41.877 "data_size": 63488 00:11:41.877 }, 00:11:41.877 { 00:11:41.878 "name": "BaseBdev4", 00:11:41.878 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:41.878 "is_configured": true, 00:11:41.878 "data_offset": 2048, 00:11:41.878 "data_size": 63488 00:11:41.878 } 00:11:41.878 ] 00:11:41.878 }' 00:11:41.878 10:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.878 10:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.445 [2024-11-19 10:05:56.536817] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.445 10:05:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.445 "name": "Existed_Raid", 00:11:42.445 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:42.445 "strip_size_kb": 64, 00:11:42.445 "state": "configuring", 00:11:42.445 "raid_level": "concat", 00:11:42.445 "superblock": true, 00:11:42.445 "num_base_bdevs": 4, 00:11:42.445 "num_base_bdevs_discovered": 2, 00:11:42.445 "num_base_bdevs_operational": 4, 00:11:42.445 "base_bdevs_list": [ 00:11:42.445 { 00:11:42.445 "name": "BaseBdev1", 00:11:42.445 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:42.445 "is_configured": true, 00:11:42.445 "data_offset": 2048, 00:11:42.445 "data_size": 63488 00:11:42.445 }, 00:11:42.445 { 00:11:42.445 "name": null, 00:11:42.445 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:42.445 "is_configured": false, 00:11:42.445 "data_offset": 0, 00:11:42.445 "data_size": 63488 00:11:42.445 }, 00:11:42.445 { 00:11:42.445 "name": null, 00:11:42.445 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:42.445 "is_configured": false, 00:11:42.445 "data_offset": 0, 00:11:42.445 "data_size": 63488 00:11:42.445 }, 00:11:42.445 { 00:11:42.445 "name": "BaseBdev4", 00:11:42.445 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:42.445 "is_configured": true, 00:11:42.445 "data_offset": 2048, 00:11:42.445 "data_size": 63488 00:11:42.445 } 00:11:42.445 ] 00:11:42.445 }' 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.445 10:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.012 10:05:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.012 [2024-11-19 10:05:57.149094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.012 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.013 "name": "Existed_Raid", 00:11:43.013 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:43.013 "strip_size_kb": 64, 00:11:43.013 "state": "configuring", 00:11:43.013 "raid_level": "concat", 00:11:43.013 "superblock": true, 00:11:43.013 "num_base_bdevs": 4, 00:11:43.013 "num_base_bdevs_discovered": 3, 00:11:43.013 "num_base_bdevs_operational": 4, 00:11:43.013 "base_bdevs_list": [ 00:11:43.013 { 00:11:43.013 "name": "BaseBdev1", 00:11:43.013 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:43.013 "is_configured": true, 00:11:43.013 "data_offset": 2048, 00:11:43.013 "data_size": 63488 00:11:43.013 }, 00:11:43.013 { 00:11:43.013 "name": null, 00:11:43.013 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:43.013 "is_configured": false, 00:11:43.013 "data_offset": 0, 00:11:43.013 "data_size": 63488 00:11:43.013 }, 00:11:43.013 { 00:11:43.013 "name": "BaseBdev3", 00:11:43.013 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:43.013 "is_configured": true, 00:11:43.013 "data_offset": 2048, 00:11:43.013 "data_size": 63488 00:11:43.013 }, 00:11:43.013 { 00:11:43.013 "name": "BaseBdev4", 00:11:43.013 "uuid": 
"ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:43.013 "is_configured": true, 00:11:43.013 "data_offset": 2048, 00:11:43.013 "data_size": 63488 00:11:43.013 } 00:11:43.013 ] 00:11:43.013 }' 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.013 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.580 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.580 [2024-11-19 10:05:57.769331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.840 "name": "Existed_Raid", 00:11:43.840 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:43.840 "strip_size_kb": 64, 00:11:43.840 "state": "configuring", 00:11:43.840 "raid_level": "concat", 00:11:43.840 "superblock": true, 00:11:43.840 "num_base_bdevs": 4, 00:11:43.840 "num_base_bdevs_discovered": 2, 00:11:43.840 "num_base_bdevs_operational": 4, 00:11:43.840 "base_bdevs_list": [ 00:11:43.840 { 00:11:43.840 "name": null, 00:11:43.840 
"uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:43.840 "is_configured": false, 00:11:43.840 "data_offset": 0, 00:11:43.840 "data_size": 63488 00:11:43.840 }, 00:11:43.840 { 00:11:43.840 "name": null, 00:11:43.840 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:43.840 "is_configured": false, 00:11:43.840 "data_offset": 0, 00:11:43.840 "data_size": 63488 00:11:43.840 }, 00:11:43.840 { 00:11:43.840 "name": "BaseBdev3", 00:11:43.840 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:43.840 "is_configured": true, 00:11:43.840 "data_offset": 2048, 00:11:43.840 "data_size": 63488 00:11:43.840 }, 00:11:43.840 { 00:11:43.840 "name": "BaseBdev4", 00:11:43.840 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:43.840 "is_configured": true, 00:11:43.840 "data_offset": 2048, 00:11:43.840 "data_size": 63488 00:11:43.840 } 00:11:43.840 ] 00:11:43.840 }' 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.840 10:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 [2024-11-19 10:05:58.460335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.408 10:05:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.408 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.408 "name": "Existed_Raid", 00:11:44.409 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:44.409 "strip_size_kb": 64, 00:11:44.409 "state": "configuring", 00:11:44.409 "raid_level": "concat", 00:11:44.409 "superblock": true, 00:11:44.409 "num_base_bdevs": 4, 00:11:44.409 "num_base_bdevs_discovered": 3, 00:11:44.409 "num_base_bdevs_operational": 4, 00:11:44.409 "base_bdevs_list": [ 00:11:44.409 { 00:11:44.409 "name": null, 00:11:44.409 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:44.409 "is_configured": false, 00:11:44.409 "data_offset": 0, 00:11:44.409 "data_size": 63488 00:11:44.409 }, 00:11:44.409 { 00:11:44.409 "name": "BaseBdev2", 00:11:44.409 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:44.409 "is_configured": true, 00:11:44.409 "data_offset": 2048, 00:11:44.409 "data_size": 63488 00:11:44.409 }, 00:11:44.409 { 00:11:44.409 "name": "BaseBdev3", 00:11:44.409 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:44.409 "is_configured": true, 00:11:44.409 "data_offset": 2048, 00:11:44.409 "data_size": 63488 00:11:44.409 }, 00:11:44.409 { 00:11:44.409 "name": "BaseBdev4", 00:11:44.409 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:44.409 "is_configured": true, 00:11:44.409 "data_offset": 2048, 00:11:44.409 "data_size": 63488 00:11:44.409 } 00:11:44.409 ] 00:11:44.409 }' 00:11:44.409 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.409 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 10:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.976 10:05:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.976 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.976 10:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f519486d-fe9e-4958-9b1e-d1e509977b40 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 [2024-11-19 10:05:59.147837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.976 [2024-11-19 10:05:59.148195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.976 [2024-11-19 10:05:59.148220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.976 NewBaseBdev 00:11:44.976 [2024-11-19 10:05:59.148579] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:44.976 [2024-11-19 10:05:59.148813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.976 [2024-11-19 10:05:59.148863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.976 [2024-11-19 10:05:59.149028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.976 
10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.976 [ 00:11:44.976 { 00:11:44.976 "name": "NewBaseBdev", 00:11:44.976 "aliases": [ 00:11:44.976 "f519486d-fe9e-4958-9b1e-d1e509977b40" 00:11:44.976 ], 00:11:44.976 "product_name": "Malloc disk", 00:11:44.976 "block_size": 512, 00:11:44.976 "num_blocks": 65536, 00:11:44.976 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:44.976 "assigned_rate_limits": { 00:11:44.976 "rw_ios_per_sec": 0, 00:11:44.976 "rw_mbytes_per_sec": 0, 00:11:44.976 "r_mbytes_per_sec": 0, 00:11:44.976 "w_mbytes_per_sec": 0 00:11:44.976 }, 00:11:44.976 "claimed": true, 00:11:44.976 "claim_type": "exclusive_write", 00:11:44.976 "zoned": false, 00:11:44.976 "supported_io_types": { 00:11:44.976 "read": true, 00:11:44.976 "write": true, 00:11:44.976 "unmap": true, 00:11:44.976 "flush": true, 00:11:44.976 "reset": true, 00:11:44.976 "nvme_admin": false, 00:11:44.976 "nvme_io": false, 00:11:44.976 "nvme_io_md": false, 00:11:44.976 "write_zeroes": true, 00:11:44.976 "zcopy": true, 00:11:44.976 "get_zone_info": false, 00:11:44.976 "zone_management": false, 00:11:44.976 "zone_append": false, 00:11:44.976 "compare": false, 00:11:44.976 "compare_and_write": false, 00:11:44.976 "abort": true, 00:11:44.976 "seek_hole": false, 00:11:44.976 "seek_data": false, 00:11:44.976 "copy": true, 00:11:44.976 "nvme_iov_md": false 00:11:44.976 }, 00:11:44.976 "memory_domains": [ 00:11:44.976 { 00:11:44.976 "dma_device_id": "system", 00:11:44.976 "dma_device_type": 1 00:11:44.976 }, 00:11:44.976 { 00:11:44.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.976 "dma_device_type": 2 00:11:44.976 } 00:11:44.976 ], 00:11:44.976 "driver_specific": {} 00:11:44.976 } 00:11:44.976 ] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.976 10:05:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:44.976 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.977 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.265 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.265 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.265 "name": "Existed_Raid", 00:11:45.265 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:45.265 "strip_size_kb": 64, 00:11:45.265 
"state": "online", 00:11:45.265 "raid_level": "concat", 00:11:45.265 "superblock": true, 00:11:45.265 "num_base_bdevs": 4, 00:11:45.265 "num_base_bdevs_discovered": 4, 00:11:45.265 "num_base_bdevs_operational": 4, 00:11:45.265 "base_bdevs_list": [ 00:11:45.265 { 00:11:45.265 "name": "NewBaseBdev", 00:11:45.265 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:45.265 "is_configured": true, 00:11:45.265 "data_offset": 2048, 00:11:45.265 "data_size": 63488 00:11:45.265 }, 00:11:45.265 { 00:11:45.265 "name": "BaseBdev2", 00:11:45.265 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:45.265 "is_configured": true, 00:11:45.265 "data_offset": 2048, 00:11:45.265 "data_size": 63488 00:11:45.265 }, 00:11:45.265 { 00:11:45.265 "name": "BaseBdev3", 00:11:45.265 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:45.265 "is_configured": true, 00:11:45.265 "data_offset": 2048, 00:11:45.265 "data_size": 63488 00:11:45.265 }, 00:11:45.265 { 00:11:45.265 "name": "BaseBdev4", 00:11:45.265 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:45.265 "is_configured": true, 00:11:45.265 "data_offset": 2048, 00:11:45.265 "data_size": 63488 00:11:45.265 } 00:11:45.265 ] 00:11:45.265 }' 00:11:45.265 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.265 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.524 
10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.524 [2024-11-19 10:05:59.716681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.524 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.783 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.783 "name": "Existed_Raid", 00:11:45.783 "aliases": [ 00:11:45.783 "eaccd351-2804-498b-b586-9d2f994e0a3f" 00:11:45.783 ], 00:11:45.783 "product_name": "Raid Volume", 00:11:45.783 "block_size": 512, 00:11:45.783 "num_blocks": 253952, 00:11:45.783 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:45.783 "assigned_rate_limits": { 00:11:45.783 "rw_ios_per_sec": 0, 00:11:45.783 "rw_mbytes_per_sec": 0, 00:11:45.783 "r_mbytes_per_sec": 0, 00:11:45.783 "w_mbytes_per_sec": 0 00:11:45.783 }, 00:11:45.783 "claimed": false, 00:11:45.783 "zoned": false, 00:11:45.783 "supported_io_types": { 00:11:45.783 "read": true, 00:11:45.783 "write": true, 00:11:45.783 "unmap": true, 00:11:45.783 "flush": true, 00:11:45.783 "reset": true, 00:11:45.783 "nvme_admin": false, 00:11:45.783 "nvme_io": false, 00:11:45.783 "nvme_io_md": false, 00:11:45.783 "write_zeroes": true, 00:11:45.783 "zcopy": false, 00:11:45.783 "get_zone_info": false, 00:11:45.783 "zone_management": false, 00:11:45.783 "zone_append": false, 00:11:45.783 "compare": false, 00:11:45.783 "compare_and_write": false, 00:11:45.783 "abort": 
false, 00:11:45.783 "seek_hole": false, 00:11:45.783 "seek_data": false, 00:11:45.783 "copy": false, 00:11:45.783 "nvme_iov_md": false 00:11:45.783 }, 00:11:45.783 "memory_domains": [ 00:11:45.783 { 00:11:45.783 "dma_device_id": "system", 00:11:45.783 "dma_device_type": 1 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.784 "dma_device_type": 2 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "system", 00:11:45.784 "dma_device_type": 1 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.784 "dma_device_type": 2 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "system", 00:11:45.784 "dma_device_type": 1 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.784 "dma_device_type": 2 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "system", 00:11:45.784 "dma_device_type": 1 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.784 "dma_device_type": 2 00:11:45.784 } 00:11:45.784 ], 00:11:45.784 "driver_specific": { 00:11:45.784 "raid": { 00:11:45.784 "uuid": "eaccd351-2804-498b-b586-9d2f994e0a3f", 00:11:45.784 "strip_size_kb": 64, 00:11:45.784 "state": "online", 00:11:45.784 "raid_level": "concat", 00:11:45.784 "superblock": true, 00:11:45.784 "num_base_bdevs": 4, 00:11:45.784 "num_base_bdevs_discovered": 4, 00:11:45.784 "num_base_bdevs_operational": 4, 00:11:45.784 "base_bdevs_list": [ 00:11:45.784 { 00:11:45.784 "name": "NewBaseBdev", 00:11:45.784 "uuid": "f519486d-fe9e-4958-9b1e-d1e509977b40", 00:11:45.784 "is_configured": true, 00:11:45.784 "data_offset": 2048, 00:11:45.784 "data_size": 63488 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "name": "BaseBdev2", 00:11:45.784 "uuid": "4033b0d4-64ed-488d-88e6-1d7bc5efa95e", 00:11:45.784 "is_configured": true, 00:11:45.784 "data_offset": 2048, 00:11:45.784 "data_size": 63488 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 
"name": "BaseBdev3", 00:11:45.784 "uuid": "f47177ec-748c-4309-9d33-3ea9dbf504b5", 00:11:45.784 "is_configured": true, 00:11:45.784 "data_offset": 2048, 00:11:45.784 "data_size": 63488 00:11:45.784 }, 00:11:45.784 { 00:11:45.784 "name": "BaseBdev4", 00:11:45.784 "uuid": "ef740bc0-a5ea-4bb8-8ddd-b103e0bf1a4d", 00:11:45.784 "is_configured": true, 00:11:45.784 "data_offset": 2048, 00:11:45.784 "data_size": 63488 00:11:45.784 } 00:11:45.784 ] 00:11:45.784 } 00:11:45.784 } 00:11:45.784 }' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:45.784 BaseBdev2 00:11:45.784 BaseBdev3 00:11:45.784 BaseBdev4' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.784 10:05:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.784 10:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.043 [2024-11-19 10:06:00.084177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.043 [2024-11-19 10:06:00.084219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.043 [2024-11-19 10:06:00.084357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.043 [2024-11-19 10:06:00.084485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.043 [2024-11-19 10:06:00.084502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71958 00:11:46.043 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71958 ']' 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71958 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71958 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.044 killing process with pid 71958 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71958' 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71958 00:11:46.044 [2024-11-19 10:06:00.126283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.044 10:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71958 00:11:46.303 [2024-11-19 10:06:00.509528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.678 ************************************ 00:11:47.678 END TEST raid_state_function_test_sb 00:11:47.678 ************************************ 00:11:47.678 10:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:47.678 00:11:47.678 real 0m13.350s 00:11:47.678 user 0m21.949s 00:11:47.678 sys 
0m1.933s 00:11:47.678 10:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.678 10:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.678 10:06:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:47.678 10:06:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.678 10:06:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.678 10:06:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.678 ************************************ 00:11:47.678 START TEST raid_superblock_test 00:11:47.678 ************************************ 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72646 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72646 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72646 ']' 00:11:47.678 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.679 10:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:47.679 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.679 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.679 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.679 10:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.679 [2024-11-19 10:06:01.833198] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:47.679 [2024-11-19 10:06:01.833405] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72646 ] 00:11:47.938 [2024-11-19 10:06:02.020685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.196 [2024-11-19 10:06:02.173389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.196 [2024-11-19 10:06:02.410939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.196 [2024-11-19 10:06:02.411036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:48.763 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:48.764 
10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.764 malloc1 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.764 [2024-11-19 10:06:02.895007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.764 [2024-11-19 10:06:02.895232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.764 [2024-11-19 10:06:02.895314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:48.764 [2024-11-19 10:06:02.895549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.764 [2024-11-19 10:06:02.898617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.764 [2024-11-19 10:06:02.898799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.764 pt1 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.764 malloc2 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.764 [2024-11-19 10:06:02.957831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.764 [2024-11-19 10:06:02.957899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.764 [2024-11-19 10:06:02.957936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:48.764 [2024-11-19 10:06:02.957953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.764 [2024-11-19 10:06:02.960964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.764 [2024-11-19 10:06:02.961005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.764 
pt2 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.764 10:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.023 malloc3 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.023 [2024-11-19 10:06:03.028809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.023 [2024-11-19 10:06:03.029007] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.023 [2024-11-19 10:06:03.029089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:49.023 [2024-11-19 10:06:03.029239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.023 [2024-11-19 10:06:03.032245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.023 [2024-11-19 10:06:03.032410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.023 pt3 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.023 malloc4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.023 [2024-11-19 10:06:03.089426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.023 [2024-11-19 10:06:03.089619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.023 [2024-11-19 10:06:03.089696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:49.023 [2024-11-19 10:06:03.089820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.023 [2024-11-19 10:06:03.092803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.023 [2024-11-19 10:06:03.092952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.023 pt4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.023 [2024-11-19 10:06:03.101736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.023 [2024-11-19 
10:06:03.104411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.023 [2024-11-19 10:06:03.104510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.023 [2024-11-19 10:06:03.104608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.023 [2024-11-19 10:06:03.104909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:49.023 [2024-11-19 10:06:03.104928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:49.023 [2024-11-19 10:06:03.105276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:49.023 [2024-11-19 10:06:03.105515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:49.023 [2024-11-19 10:06:03.105536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:49.023 [2024-11-19 10:06:03.105766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.023 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.024 "name": "raid_bdev1", 00:11:49.024 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:49.024 "strip_size_kb": 64, 00:11:49.024 "state": "online", 00:11:49.024 "raid_level": "concat", 00:11:49.024 "superblock": true, 00:11:49.024 "num_base_bdevs": 4, 00:11:49.024 "num_base_bdevs_discovered": 4, 00:11:49.024 "num_base_bdevs_operational": 4, 00:11:49.024 "base_bdevs_list": [ 00:11:49.024 { 00:11:49.024 "name": "pt1", 00:11:49.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.024 "is_configured": true, 00:11:49.024 "data_offset": 2048, 00:11:49.024 "data_size": 63488 00:11:49.024 }, 00:11:49.024 { 00:11:49.024 "name": "pt2", 00:11:49.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.024 "is_configured": true, 00:11:49.024 "data_offset": 2048, 00:11:49.024 "data_size": 63488 00:11:49.024 }, 00:11:49.024 { 00:11:49.024 "name": "pt3", 00:11:49.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.024 "is_configured": true, 00:11:49.024 "data_offset": 2048, 00:11:49.024 
"data_size": 63488 00:11:49.024 }, 00:11:49.024 { 00:11:49.024 "name": "pt4", 00:11:49.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.024 "is_configured": true, 00:11:49.024 "data_offset": 2048, 00:11:49.024 "data_size": 63488 00:11:49.024 } 00:11:49.024 ] 00:11:49.024 }' 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.024 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.591 [2024-11-19 10:06:03.630354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.591 "name": "raid_bdev1", 00:11:49.591 "aliases": [ 00:11:49.591 "47f95ee0-c68a-4937-8005-c9f69b68f367" 
00:11:49.591 ], 00:11:49.591 "product_name": "Raid Volume", 00:11:49.591 "block_size": 512, 00:11:49.591 "num_blocks": 253952, 00:11:49.591 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:49.591 "assigned_rate_limits": { 00:11:49.591 "rw_ios_per_sec": 0, 00:11:49.591 "rw_mbytes_per_sec": 0, 00:11:49.591 "r_mbytes_per_sec": 0, 00:11:49.591 "w_mbytes_per_sec": 0 00:11:49.591 }, 00:11:49.591 "claimed": false, 00:11:49.591 "zoned": false, 00:11:49.591 "supported_io_types": { 00:11:49.591 "read": true, 00:11:49.591 "write": true, 00:11:49.591 "unmap": true, 00:11:49.591 "flush": true, 00:11:49.591 "reset": true, 00:11:49.591 "nvme_admin": false, 00:11:49.591 "nvme_io": false, 00:11:49.591 "nvme_io_md": false, 00:11:49.591 "write_zeroes": true, 00:11:49.591 "zcopy": false, 00:11:49.591 "get_zone_info": false, 00:11:49.591 "zone_management": false, 00:11:49.591 "zone_append": false, 00:11:49.591 "compare": false, 00:11:49.591 "compare_and_write": false, 00:11:49.591 "abort": false, 00:11:49.591 "seek_hole": false, 00:11:49.591 "seek_data": false, 00:11:49.591 "copy": false, 00:11:49.592 "nvme_iov_md": false 00:11:49.592 }, 00:11:49.592 "memory_domains": [ 00:11:49.592 { 00:11:49.592 "dma_device_id": "system", 00:11:49.592 "dma_device_type": 1 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.592 "dma_device_type": 2 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "system", 00:11:49.592 "dma_device_type": 1 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.592 "dma_device_type": 2 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "system", 00:11:49.592 "dma_device_type": 1 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.592 "dma_device_type": 2 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": "system", 00:11:49.592 "dma_device_type": 1 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:49.592 "dma_device_type": 2 00:11:49.592 } 00:11:49.592 ], 00:11:49.592 "driver_specific": { 00:11:49.592 "raid": { 00:11:49.592 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:49.592 "strip_size_kb": 64, 00:11:49.592 "state": "online", 00:11:49.592 "raid_level": "concat", 00:11:49.592 "superblock": true, 00:11:49.592 "num_base_bdevs": 4, 00:11:49.592 "num_base_bdevs_discovered": 4, 00:11:49.592 "num_base_bdevs_operational": 4, 00:11:49.592 "base_bdevs_list": [ 00:11:49.592 { 00:11:49.592 "name": "pt1", 00:11:49.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.592 "is_configured": true, 00:11:49.592 "data_offset": 2048, 00:11:49.592 "data_size": 63488 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "name": "pt2", 00:11:49.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.592 "is_configured": true, 00:11:49.592 "data_offset": 2048, 00:11:49.592 "data_size": 63488 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "name": "pt3", 00:11:49.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.592 "is_configured": true, 00:11:49.592 "data_offset": 2048, 00:11:49.592 "data_size": 63488 00:11:49.592 }, 00:11:49.592 { 00:11:49.592 "name": "pt4", 00:11:49.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.592 "is_configured": true, 00:11:49.592 "data_offset": 2048, 00:11:49.592 "data_size": 63488 00:11:49.592 } 00:11:49.592 ] 00:11:49.592 } 00:11:49.592 } 00:11:49.592 }' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:49.592 pt2 00:11:49.592 pt3 00:11:49.592 pt4' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.592 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.850 10:06:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.850 10:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:49.851 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:49.851 10:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.851 [2024-11-19 10:06:03.978366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=47f95ee0-c68a-4937-8005-c9f69b68f367 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 47f95ee0-c68a-4937-8005-c9f69b68f367 ']' 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.851 [2024-11-19 10:06:04.026031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.851 [2024-11-19 10:06:04.026069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.851 [2024-11-19 10:06:04.026193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.851 [2024-11-19 10:06:04.026299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.851 [2024-11-19 10:06:04.026325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.851 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.110 10:06:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 [2024-11-19 10:06:04.182073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:50.110 [2024-11-19 10:06:04.184766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:50.110 [2024-11-19 10:06:04.184859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:50.110 [2024-11-19 10:06:04.184931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:50.110 [2024-11-19 10:06:04.185013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:50.110 [2024-11-19 10:06:04.185094] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:50.110 [2024-11-19 10:06:04.185131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:50.111 [2024-11-19 10:06:04.185165] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:50.111 [2024-11-19 10:06:04.185190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.111 [2024-11-19 10:06:04.185209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:50.111 request: 00:11:50.111 { 00:11:50.111 "name": "raid_bdev1", 00:11:50.111 "raid_level": "concat", 00:11:50.111 "base_bdevs": [ 00:11:50.111 "malloc1", 00:11:50.111 "malloc2", 00:11:50.111 "malloc3", 00:11:50.111 "malloc4" 00:11:50.111 ], 00:11:50.111 "strip_size_kb": 64, 00:11:50.111 "superblock": false, 00:11:50.111 "method": "bdev_raid_create", 00:11:50.111 "req_id": 1 00:11:50.111 } 00:11:50.111 Got JSON-RPC error response 00:11:50.111 response: 00:11:50.111 { 00:11:50.111 "code": -17, 00:11:50.111 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:50.111 } 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 [2024-11-19 10:06:04.242059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.111 [2024-11-19 10:06:04.242247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.111 [2024-11-19 10:06:04.242419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.111 [2024-11-19 10:06:04.242538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.111 [2024-11-19 10:06:04.245705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.111 [2024-11-19 10:06:04.245892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.111 [2024-11-19 10:06:04.246098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:50.111 [2024-11-19 10:06:04.246283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.111 pt1 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.111 "name": "raid_bdev1", 00:11:50.111 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:50.111 "strip_size_kb": 64, 00:11:50.111 "state": "configuring", 00:11:50.111 "raid_level": "concat", 00:11:50.111 "superblock": true, 00:11:50.111 "num_base_bdevs": 4, 00:11:50.111 "num_base_bdevs_discovered": 1, 00:11:50.111 "num_base_bdevs_operational": 4, 00:11:50.111 "base_bdevs_list": [ 00:11:50.111 { 00:11:50.111 "name": "pt1", 00:11:50.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.111 "is_configured": true, 00:11:50.111 "data_offset": 2048, 00:11:50.111 "data_size": 63488 00:11:50.111 }, 00:11:50.111 { 00:11:50.111 "name": null, 00:11:50.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.111 "is_configured": false, 00:11:50.111 "data_offset": 2048, 00:11:50.111 "data_size": 63488 00:11:50.111 }, 00:11:50.111 { 00:11:50.111 "name": null, 00:11:50.111 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.111 "is_configured": false, 00:11:50.111 "data_offset": 2048, 00:11:50.111 "data_size": 63488 00:11:50.111 }, 00:11:50.111 { 00:11:50.111 "name": null, 00:11:50.111 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.111 "is_configured": false, 00:11:50.111 "data_offset": 2048, 00:11:50.111 "data_size": 63488 00:11:50.111 } 00:11:50.111 ] 00:11:50.111 }' 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.111 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.679 [2024-11-19 10:06:04.778399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.679 [2024-11-19 10:06:04.778541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.679 [2024-11-19 10:06:04.778577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:50.679 [2024-11-19 10:06:04.778597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.679 [2024-11-19 10:06:04.779247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.679 [2024-11-19 10:06:04.779296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.679 [2024-11-19 10:06:04.779415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:50.679 [2024-11-19 10:06:04.779456] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.679 pt2 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.679 [2024-11-19 10:06:04.786422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.679 10:06:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.679 "name": "raid_bdev1", 00:11:50.679 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:50.679 "strip_size_kb": 64, 00:11:50.679 "state": "configuring", 00:11:50.679 "raid_level": "concat", 00:11:50.679 "superblock": true, 00:11:50.679 "num_base_bdevs": 4, 00:11:50.679 "num_base_bdevs_discovered": 1, 00:11:50.679 "num_base_bdevs_operational": 4, 00:11:50.679 "base_bdevs_list": [ 00:11:50.679 { 00:11:50.679 "name": "pt1", 00:11:50.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.679 "is_configured": true, 00:11:50.679 "data_offset": 2048, 00:11:50.679 "data_size": 63488 00:11:50.679 }, 00:11:50.679 { 00:11:50.679 "name": null, 00:11:50.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.679 "is_configured": false, 00:11:50.679 "data_offset": 0, 00:11:50.679 "data_size": 63488 00:11:50.679 }, 00:11:50.679 { 00:11:50.679 "name": null, 00:11:50.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.679 "is_configured": false, 00:11:50.679 "data_offset": 2048, 00:11:50.679 "data_size": 63488 00:11:50.679 }, 00:11:50.679 { 00:11:50.679 "name": null, 00:11:50.679 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.679 "is_configured": false, 00:11:50.679 "data_offset": 2048, 00:11:50.679 "data_size": 63488 00:11:50.679 } 00:11:50.679 ] 00:11:50.679 }' 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.679 10:06:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.246 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:51.246 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.246 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.246 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.246 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-11-19 10:06:05.322587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.247 [2024-11-19 10:06:05.322888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-11-19 10:06:05.322938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:51.247 [2024-11-19 10:06:05.322957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-11-19 10:06:05.323619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 [2024-11-19 10:06:05.323651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.247 [2024-11-19 10:06:05.323774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:51.247 [2024-11-19 10:06:05.323830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.247 pt2 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-11-19 10:06:05.334593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:51.247 [2024-11-19 10:06:05.334697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-11-19 10:06:05.334740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:51.247 [2024-11-19 10:06:05.334758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-11-19 10:06:05.335409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 [2024-11-19 10:06:05.335450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:51.247 [2024-11-19 10:06:05.335575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:51.247 [2024-11-19 10:06:05.335611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.247 pt3 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-11-19 10:06:05.346527] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:51.247 [2024-11-19 10:06:05.346624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-11-19 10:06:05.346661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:51.247 [2024-11-19 10:06:05.346676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-11-19 10:06:05.347444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 [2024-11-19 10:06:05.347487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:51.247 [2024-11-19 10:06:05.347607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:51.247 [2024-11-19 10:06:05.347642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:51.247 [2024-11-19 10:06:05.347873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.247 [2024-11-19 10:06:05.347903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.247 [2024-11-19 10:06:05.348223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:51.247 [2024-11-19 10:06:05.348472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.247 [2024-11-19 10:06:05.348500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:51.247 [2024-11-19 10:06:05.348673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.247 pt4 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.247 "name": "raid_bdev1", 00:11:51.247 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:51.247 "strip_size_kb": 64, 00:11:51.247 "state": "online", 00:11:51.247 "raid_level": "concat", 00:11:51.247 
"superblock": true, 00:11:51.247 "num_base_bdevs": 4, 00:11:51.247 "num_base_bdevs_discovered": 4, 00:11:51.247 "num_base_bdevs_operational": 4, 00:11:51.247 "base_bdevs_list": [ 00:11:51.247 { 00:11:51.247 "name": "pt1", 00:11:51.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.247 "is_configured": true, 00:11:51.247 "data_offset": 2048, 00:11:51.247 "data_size": 63488 00:11:51.247 }, 00:11:51.247 { 00:11:51.247 "name": "pt2", 00:11:51.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.247 "is_configured": true, 00:11:51.247 "data_offset": 2048, 00:11:51.247 "data_size": 63488 00:11:51.247 }, 00:11:51.247 { 00:11:51.247 "name": "pt3", 00:11:51.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.247 "is_configured": true, 00:11:51.247 "data_offset": 2048, 00:11:51.247 "data_size": 63488 00:11:51.247 }, 00:11:51.247 { 00:11:51.247 "name": "pt4", 00:11:51.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.247 "is_configured": true, 00:11:51.247 "data_offset": 2048, 00:11:51.247 "data_size": 63488 00:11:51.247 } 00:11:51.247 ] 00:11:51.247 }' 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.247 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.815 10:06:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.815 [2024-11-19 10:06:05.891177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.815 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.815 "name": "raid_bdev1", 00:11:51.815 "aliases": [ 00:11:51.815 "47f95ee0-c68a-4937-8005-c9f69b68f367" 00:11:51.815 ], 00:11:51.815 "product_name": "Raid Volume", 00:11:51.815 "block_size": 512, 00:11:51.815 "num_blocks": 253952, 00:11:51.815 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:51.815 "assigned_rate_limits": { 00:11:51.815 "rw_ios_per_sec": 0, 00:11:51.815 "rw_mbytes_per_sec": 0, 00:11:51.815 "r_mbytes_per_sec": 0, 00:11:51.815 "w_mbytes_per_sec": 0 00:11:51.815 }, 00:11:51.815 "claimed": false, 00:11:51.815 "zoned": false, 00:11:51.815 "supported_io_types": { 00:11:51.815 "read": true, 00:11:51.815 "write": true, 00:11:51.815 "unmap": true, 00:11:51.815 "flush": true, 00:11:51.815 "reset": true, 00:11:51.815 "nvme_admin": false, 00:11:51.815 "nvme_io": false, 00:11:51.815 "nvme_io_md": false, 00:11:51.815 "write_zeroes": true, 00:11:51.815 "zcopy": false, 00:11:51.815 "get_zone_info": false, 00:11:51.815 "zone_management": false, 00:11:51.815 "zone_append": false, 00:11:51.815 "compare": false, 00:11:51.815 "compare_and_write": false, 00:11:51.815 "abort": false, 00:11:51.815 "seek_hole": false, 00:11:51.815 "seek_data": false, 00:11:51.815 "copy": false, 00:11:51.815 "nvme_iov_md": false 00:11:51.815 }, 00:11:51.815 
"memory_domains": [ 00:11:51.815 { 00:11:51.815 "dma_device_id": "system", 00:11:51.815 "dma_device_type": 1 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.815 "dma_device_type": 2 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "system", 00:11:51.815 "dma_device_type": 1 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.815 "dma_device_type": 2 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "system", 00:11:51.815 "dma_device_type": 1 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.815 "dma_device_type": 2 00:11:51.815 }, 00:11:51.815 { 00:11:51.815 "dma_device_id": "system", 00:11:51.815 "dma_device_type": 1 00:11:51.815 }, 00:11:51.815 { 00:11:51.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.816 "dma_device_type": 2 00:11:51.816 } 00:11:51.816 ], 00:11:51.816 "driver_specific": { 00:11:51.816 "raid": { 00:11:51.816 "uuid": "47f95ee0-c68a-4937-8005-c9f69b68f367", 00:11:51.816 "strip_size_kb": 64, 00:11:51.816 "state": "online", 00:11:51.816 "raid_level": "concat", 00:11:51.816 "superblock": true, 00:11:51.816 "num_base_bdevs": 4, 00:11:51.816 "num_base_bdevs_discovered": 4, 00:11:51.816 "num_base_bdevs_operational": 4, 00:11:51.816 "base_bdevs_list": [ 00:11:51.816 { 00:11:51.816 "name": "pt1", 00:11:51.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.816 "is_configured": true, 00:11:51.816 "data_offset": 2048, 00:11:51.816 "data_size": 63488 00:11:51.816 }, 00:11:51.816 { 00:11:51.816 "name": "pt2", 00:11:51.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.816 "is_configured": true, 00:11:51.816 "data_offset": 2048, 00:11:51.816 "data_size": 63488 00:11:51.816 }, 00:11:51.816 { 00:11:51.816 "name": "pt3", 00:11:51.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.816 "is_configured": true, 00:11:51.816 "data_offset": 2048, 00:11:51.816 "data_size": 63488 
00:11:51.816 }, 00:11:51.816 { 00:11:51.816 "name": "pt4", 00:11:51.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.816 "is_configured": true, 00:11:51.816 "data_offset": 2048, 00:11:51.816 "data_size": 63488 00:11:51.816 } 00:11:51.816 ] 00:11:51.816 } 00:11:51.816 } 00:11:51.816 }' 00:11:51.816 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.816 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:51.816 pt2 00:11:51.816 pt3 00:11:51.816 pt4' 00:11:51.816 10:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.816 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.816 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 [2024-11-19 10:06:06.275210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.075 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 47f95ee0-c68a-4937-8005-c9f69b68f367 '!=' 47f95ee0-c68a-4937-8005-c9f69b68f367 ']' 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72646 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72646 ']' 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72646 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72646 00:11:52.334 killing process with pid 72646 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72646' 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72646 00:11:52.334 [2024-11-19 10:06:06.354802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.334 10:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72646 00:11:52.334 [2024-11-19 10:06:06.354933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.334 [2024-11-19 10:06:06.355047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.334 [2024-11-19 10:06:06.355064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:52.593 [2024-11-19 10:06:06.752259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.974 10:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:53.974 ************************************ 00:11:53.974 END TEST raid_superblock_test 00:11:53.974 ************************************ 00:11:53.974 00:11:53.974 real 0m6.192s 00:11:53.974 user 0m9.122s 00:11:53.974 sys 0m1.017s 00:11:53.974 10:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.974 10:06:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.974 10:06:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:53.974 10:06:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:53.974 10:06:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.974 10:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.974 ************************************ 00:11:53.974 START TEST raid_read_error_test 00:11:53.974 ************************************ 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:53.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mLe7emjl9J 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72916 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72916 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72916 ']' 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.974 10:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.974 [2024-11-19 10:06:08.091563] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:53.974 [2024-11-19 10:06:08.092815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72916 ] 00:11:54.233 [2024-11-19 10:06:08.275559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.233 [2024-11-19 10:06:08.427508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.493 [2024-11-19 10:06:08.663432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.493 [2024-11-19 10:06:08.663654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 BaseBdev1_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 true 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 [2024-11-19 10:06:09.150630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.062 [2024-11-19 10:06:09.150761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.062 [2024-11-19 10:06:09.150832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.062 [2024-11-19 10:06:09.150867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.062 [2024-11-19 10:06:09.154330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.062 [2024-11-19 10:06:09.154394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.062 BaseBdev1 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 BaseBdev2_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 true 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 [2024-11-19 10:06:09.227772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.062 [2024-11-19 10:06:09.227942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.062 [2024-11-19 10:06:09.227985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.062 [2024-11-19 10:06:09.228006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.062 [2024-11-19 10:06:09.231595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.062 [2024-11-19 10:06:09.231691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.062 BaseBdev2 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.062 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 BaseBdev3_malloc 00:11:55.322 10:06:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 true 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 [2024-11-19 10:06:09.318121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.322 [2024-11-19 10:06:09.318210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.322 [2024-11-19 10:06:09.318248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.322 [2024-11-19 10:06:09.318282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.322 [2024-11-19 10:06:09.321706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.322 [2024-11-19 10:06:09.321778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.322 BaseBdev3 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 BaseBdev4_malloc 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 true 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.322 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.322 [2024-11-19 10:06:09.388661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:55.322 [2024-11-19 10:06:09.388754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.322 [2024-11-19 10:06:09.388802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:55.322 [2024-11-19 10:06:09.388826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.322 [2024-11-19 10:06:09.391840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.322 [2024-11-19 10:06:09.391914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:55.322 BaseBdev4 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.323 [2024-11-19 10:06:09.396767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.323 [2024-11-19 10:06:09.399513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.323 [2024-11-19 10:06:09.399637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.323 [2024-11-19 10:06:09.399751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.323 [2024-11-19 10:06:09.400092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:55.323 [2024-11-19 10:06:09.400114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.323 [2024-11-19 10:06:09.400426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:55.323 [2024-11-19 10:06:09.400654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:55.323 [2024-11-19 10:06:09.400675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:55.323 [2024-11-19 10:06:09.400919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:55.323 10:06:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.323 "name": "raid_bdev1", 00:11:55.323 "uuid": "dbdfd49d-88a1-4f40-a9a8-79c1bebfc446", 00:11:55.323 "strip_size_kb": 64, 00:11:55.323 "state": "online", 00:11:55.323 "raid_level": "concat", 00:11:55.323 "superblock": true, 00:11:55.323 "num_base_bdevs": 4, 00:11:55.323 "num_base_bdevs_discovered": 4, 00:11:55.323 "num_base_bdevs_operational": 4, 00:11:55.323 "base_bdevs_list": [ 
00:11:55.323 { 00:11:55.323 "name": "BaseBdev1", 00:11:55.323 "uuid": "d8fb4e70-c88c-5c31-b9de-5e55d35bdf68", 00:11:55.323 "is_configured": true, 00:11:55.323 "data_offset": 2048, 00:11:55.323 "data_size": 63488 00:11:55.323 }, 00:11:55.323 { 00:11:55.323 "name": "BaseBdev2", 00:11:55.323 "uuid": "3d9769d2-11aa-5caa-952d-96e32f350ba6", 00:11:55.323 "is_configured": true, 00:11:55.323 "data_offset": 2048, 00:11:55.323 "data_size": 63488 00:11:55.323 }, 00:11:55.323 { 00:11:55.323 "name": "BaseBdev3", 00:11:55.323 "uuid": "d1b05dcf-c1ab-5d6c-ac49-8bc29c019791", 00:11:55.323 "is_configured": true, 00:11:55.323 "data_offset": 2048, 00:11:55.323 "data_size": 63488 00:11:55.323 }, 00:11:55.323 { 00:11:55.323 "name": "BaseBdev4", 00:11:55.323 "uuid": "27ee6396-ebdb-5a52-8d03-ffca4e687023", 00:11:55.323 "is_configured": true, 00:11:55.323 "data_offset": 2048, 00:11:55.323 "data_size": 63488 00:11:55.323 } 00:11:55.323 ] 00:11:55.323 }' 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.323 10:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.892 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:55.892 10:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.892 [2024-11-19 10:06:10.042821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.830 10:06:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.830 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.830 10:06:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.830 "name": "raid_bdev1", 00:11:56.830 "uuid": "dbdfd49d-88a1-4f40-a9a8-79c1bebfc446", 00:11:56.830 "strip_size_kb": 64, 00:11:56.830 "state": "online", 00:11:56.830 "raid_level": "concat", 00:11:56.830 "superblock": true, 00:11:56.830 "num_base_bdevs": 4, 00:11:56.830 "num_base_bdevs_discovered": 4, 00:11:56.830 "num_base_bdevs_operational": 4, 00:11:56.830 "base_bdevs_list": [ 00:11:56.830 { 00:11:56.830 "name": "BaseBdev1", 00:11:56.830 "uuid": "d8fb4e70-c88c-5c31-b9de-5e55d35bdf68", 00:11:56.830 "is_configured": true, 00:11:56.831 "data_offset": 2048, 00:11:56.831 "data_size": 63488 00:11:56.831 }, 00:11:56.831 { 00:11:56.831 "name": "BaseBdev2", 00:11:56.831 "uuid": "3d9769d2-11aa-5caa-952d-96e32f350ba6", 00:11:56.831 "is_configured": true, 00:11:56.831 "data_offset": 2048, 00:11:56.831 "data_size": 63488 00:11:56.831 }, 00:11:56.831 { 00:11:56.831 "name": "BaseBdev3", 00:11:56.831 "uuid": "d1b05dcf-c1ab-5d6c-ac49-8bc29c019791", 00:11:56.831 "is_configured": true, 00:11:56.831 "data_offset": 2048, 00:11:56.831 "data_size": 63488 00:11:56.831 }, 00:11:56.831 { 00:11:56.831 "name": "BaseBdev4", 00:11:56.831 "uuid": "27ee6396-ebdb-5a52-8d03-ffca4e687023", 00:11:56.831 "is_configured": true, 00:11:56.831 "data_offset": 2048, 00:11:56.831 "data_size": 63488 00:11:56.831 } 00:11:56.831 ] 00:11:56.831 }' 00:11:56.831 10:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.831 10:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.399 [2024-11-19 10:06:11.457439] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.399 [2024-11-19 10:06:11.457482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.399 [2024-11-19 10:06:11.461030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.399 [2024-11-19 10:06:11.461114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.399 [2024-11-19 10:06:11.461182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.399 [2024-11-19 10:06:11.461207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:57.399 { 00:11:57.399 "results": [ 00:11:57.399 { 00:11:57.399 "job": "raid_bdev1", 00:11:57.399 "core_mask": "0x1", 00:11:57.399 "workload": "randrw", 00:11:57.399 "percentage": 50, 00:11:57.399 "status": "finished", 00:11:57.399 "queue_depth": 1, 00:11:57.399 "io_size": 131072, 00:11:57.399 "runtime": 1.411728, 00:11:57.399 "iops": 9454.370813641155, 00:11:57.399 "mibps": 1181.7963517051444, 00:11:57.399 "io_failed": 1, 00:11:57.399 "io_timeout": 0, 00:11:57.399 "avg_latency_us": 148.72508894761216, 00:11:57.399 "min_latency_us": 37.00363636363636, 00:11:57.399 "max_latency_us": 1995.8690909090908 00:11:57.399 } 00:11:57.399 ], 00:11:57.399 "core_count": 1 00:11:57.399 } 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72916 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72916 ']' 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72916 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72916 00:11:57.399 killing process with pid 72916 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72916' 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72916 00:11:57.399 [2024-11-19 10:06:11.499786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.399 10:06:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72916 00:11:57.659 [2024-11-19 10:06:11.816083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mLe7emjl9J 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.055 ************************************ 00:11:59.055 END TEST raid_read_error_test 00:11:59.055 ************************************ 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:59.055 00:11:59.055 real 0m5.064s 
00:11:59.055 user 0m6.113s 00:11:59.055 sys 0m0.715s 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.055 10:06:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 10:06:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:59.055 10:06:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:59.055 10:06:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.055 10:06:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.055 ************************************ 00:11:59.055 START TEST raid_write_error_test 00:11:59.055 ************************************ 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.055 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LYHnuAXDLM 00:11:59.056 10:06:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73066 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73066 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73066 ']' 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.056 10:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.056 [2024-11-19 10:06:13.186476] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:11:59.056 [2024-11-19 10:06:13.186839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73066 ] 00:11:59.315 [2024-11-19 10:06:13.372371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.573 [2024-11-19 10:06:13.548730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.573 [2024-11-19 10:06:13.794021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.574 [2024-11-19 10:06:13.794119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 BaseBdev1_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 true 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 [2024-11-19 10:06:14.290106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.143 [2024-11-19 10:06:14.290223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.143 [2024-11-19 10:06:14.290252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.143 [2024-11-19 10:06:14.290275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.143 [2024-11-19 10:06:14.293564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.143 [2024-11-19 10:06:14.293640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.143 BaseBdev1 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 BaseBdev2_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.143 10:06:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 true 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.143 [2024-11-19 10:06:14.359912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.143 [2024-11-19 10:06:14.359987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.143 [2024-11-19 10:06:14.360016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.143 [2024-11-19 10:06:14.360035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.143 [2024-11-19 10:06:14.363235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.143 [2024-11-19 10:06:14.363297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.143 BaseBdev2 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.143 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:00.403 BaseBdev3_malloc 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 true 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 [2024-11-19 10:06:14.439993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.403 [2024-11-19 10:06:14.440206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.403 [2024-11-19 10:06:14.440246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.403 [2024-11-19 10:06:14.440266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.403 [2024-11-19 10:06:14.443375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.403 [2024-11-19 10:06:14.443587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.403 BaseBdev3 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 BaseBdev4_malloc 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 true 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 [2024-11-19 10:06:14.511540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:00.403 [2024-11-19 10:06:14.511633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.403 [2024-11-19 10:06:14.511660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:00.403 [2024-11-19 10:06:14.511677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.403 [2024-11-19 10:06:14.514715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.403 [2024-11-19 10:06:14.514778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.403 BaseBdev4 
00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.403 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 [2024-11-19 10:06:14.523691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.404 [2024-11-19 10:06:14.526497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.404 [2024-11-19 10:06:14.526597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.404 [2024-11-19 10:06:14.526704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.404 [2024-11-19 10:06:14.527077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:00.404 [2024-11-19 10:06:14.527098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:00.404 [2024-11-19 10:06:14.527444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:00.404 [2024-11-19 10:06:14.527675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:00.404 [2024-11-19 10:06:14.527708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:00.404 [2024-11-19 10:06:14.527992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.404 "name": "raid_bdev1", 00:12:00.404 "uuid": "650b52a7-be2f-4d12-857c-121ff3f6b73a", 00:12:00.404 "strip_size_kb": 64, 00:12:00.404 "state": "online", 00:12:00.404 "raid_level": "concat", 00:12:00.404 "superblock": true, 00:12:00.404 "num_base_bdevs": 4, 00:12:00.404 "num_base_bdevs_discovered": 4, 00:12:00.404 
"num_base_bdevs_operational": 4, 00:12:00.404 "base_bdevs_list": [ 00:12:00.404 { 00:12:00.404 "name": "BaseBdev1", 00:12:00.404 "uuid": "28f5a809-7b31-512c-875e-3325098985a5", 00:12:00.404 "is_configured": true, 00:12:00.404 "data_offset": 2048, 00:12:00.404 "data_size": 63488 00:12:00.404 }, 00:12:00.404 { 00:12:00.404 "name": "BaseBdev2", 00:12:00.404 "uuid": "5af5b78a-c3f5-53c2-8625-17319bcb4948", 00:12:00.404 "is_configured": true, 00:12:00.404 "data_offset": 2048, 00:12:00.404 "data_size": 63488 00:12:00.404 }, 00:12:00.404 { 00:12:00.404 "name": "BaseBdev3", 00:12:00.404 "uuid": "cbd66889-2375-5f15-9be5-77f24e48eb6c", 00:12:00.404 "is_configured": true, 00:12:00.404 "data_offset": 2048, 00:12:00.404 "data_size": 63488 00:12:00.404 }, 00:12:00.404 { 00:12:00.404 "name": "BaseBdev4", 00:12:00.404 "uuid": "09269d90-3b38-5b48-9da6-b7e84a49696c", 00:12:00.404 "is_configured": true, 00:12:00.404 "data_offset": 2048, 00:12:00.404 "data_size": 63488 00:12:00.404 } 00:12:00.404 ] 00:12:00.404 }' 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.404 10:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.971 10:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:00.971 10:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:00.971 [2024-11-19 10:06:15.161856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.907 10:06:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.907 "name": "raid_bdev1", 00:12:01.907 "uuid": "650b52a7-be2f-4d12-857c-121ff3f6b73a", 00:12:01.907 "strip_size_kb": 64, 00:12:01.907 "state": "online", 00:12:01.907 "raid_level": "concat", 00:12:01.907 "superblock": true, 00:12:01.907 "num_base_bdevs": 4, 00:12:01.907 "num_base_bdevs_discovered": 4, 00:12:01.907 "num_base_bdevs_operational": 4, 00:12:01.907 "base_bdevs_list": [ 00:12:01.907 { 00:12:01.907 "name": "BaseBdev1", 00:12:01.907 "uuid": "28f5a809-7b31-512c-875e-3325098985a5", 00:12:01.907 "is_configured": true, 00:12:01.907 "data_offset": 2048, 00:12:01.907 "data_size": 63488 00:12:01.907 }, 00:12:01.907 { 00:12:01.907 "name": "BaseBdev2", 00:12:01.907 "uuid": "5af5b78a-c3f5-53c2-8625-17319bcb4948", 00:12:01.907 "is_configured": true, 00:12:01.907 "data_offset": 2048, 00:12:01.907 "data_size": 63488 00:12:01.907 }, 00:12:01.907 { 00:12:01.907 "name": "BaseBdev3", 00:12:01.907 "uuid": "cbd66889-2375-5f15-9be5-77f24e48eb6c", 00:12:01.907 "is_configured": true, 00:12:01.907 "data_offset": 2048, 00:12:01.907 "data_size": 63488 00:12:01.907 }, 00:12:01.907 { 00:12:01.907 "name": "BaseBdev4", 00:12:01.907 "uuid": "09269d90-3b38-5b48-9da6-b7e84a49696c", 00:12:01.907 "is_configured": true, 00:12:01.907 "data_offset": 2048, 00:12:01.907 "data_size": 63488 00:12:01.907 } 00:12:01.907 ] 00:12:01.907 }' 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.907 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.475 [2024-11-19 10:06:16.564663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.475 [2024-11-19 10:06:16.564705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.475 [2024-11-19 10:06:16.568503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.475 [2024-11-19 10:06:16.568647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.475 [2024-11-19 10:06:16.568754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.475 [2024-11-19 10:06:16.568947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 { 00:12:02.475 "results": [ 00:12:02.475 { 00:12:02.475 "job": "raid_bdev1", 00:12:02.475 "core_mask": "0x1", 00:12:02.475 "workload": "randrw", 00:12:02.475 "percentage": 50, 00:12:02.475 "status": "finished", 00:12:02.475 "queue_depth": 1, 00:12:02.475 "io_size": 131072, 00:12:02.475 "runtime": 1.399675, 00:12:02.475 "iops": 9675.817600514405, 00:12:02.475 "mibps": 1209.4772000643006, 00:12:02.475 "io_failed": 1, 00:12:02.475 "io_timeout": 0, 00:12:02.475 "avg_latency_us": 145.54338183966064, 00:12:02.475 "min_latency_us": 38.167272727272724, 00:12:02.475 "max_latency_us": 1869.2654545454545 00:12:02.475 } 00:12:02.475 ], 00:12:02.475 "core_count": 1 00:12:02.475 } 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73066 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73066 ']' 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73066 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73066 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73066' 00:12:02.475 killing process with pid 73066 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73066 00:12:02.475 10:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73066 00:12:02.475 [2024-11-19 10:06:16.608200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.733 [2024-11-19 10:06:16.930456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LYHnuAXDLM 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:04.113 00:12:04.113 real 0m5.092s 00:12:04.113 user 0m6.171s 
00:12:04.113 sys 0m0.692s 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.113 10:06:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 ************************************ 00:12:04.113 END TEST raid_write_error_test 00:12:04.113 ************************************ 00:12:04.113 10:06:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:04.113 10:06:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:04.113 10:06:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.113 10:06:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.113 10:06:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 ************************************ 00:12:04.113 START TEST raid_state_function_test 00:12:04.113 ************************************ 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.113 
10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:04.113 10:06:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:04.113 Process raid pid: 73211 00:12:04.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73211 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73211' 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73211 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73211 ']' 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.113 10:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 [2024-11-19 10:06:18.342502] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:12:04.113 [2024-11-19 10:06:18.342692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.372 [2024-11-19 10:06:18.534880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.631 [2024-11-19 10:06:18.691657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.889 [2024-11-19 10:06:18.932950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.889 [2024-11-19 10:06:18.933007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.146 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.146 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.146 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.147 [2024-11-19 10:06:19.344342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.147 [2024-11-19 10:06:19.344425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.147 [2024-11-19 10:06:19.344442] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.147 [2024-11-19 10:06:19.344457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.147 [2024-11-19 10:06:19.344466] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:05.147 [2024-11-19 10:06:19.344480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.147 [2024-11-19 10:06:19.344488] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.147 [2024-11-19 10:06:19.344501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.147 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.405 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.405 "name": "Existed_Raid", 00:12:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.405 "strip_size_kb": 0, 00:12:05.405 "state": "configuring", 00:12:05.405 "raid_level": "raid1", 00:12:05.405 "superblock": false, 00:12:05.405 "num_base_bdevs": 4, 00:12:05.405 "num_base_bdevs_discovered": 0, 00:12:05.405 "num_base_bdevs_operational": 4, 00:12:05.405 "base_bdevs_list": [ 00:12:05.405 { 00:12:05.405 "name": "BaseBdev1", 00:12:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.405 "is_configured": false, 00:12:05.405 "data_offset": 0, 00:12:05.405 "data_size": 0 00:12:05.405 }, 00:12:05.405 { 00:12:05.405 "name": "BaseBdev2", 00:12:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.405 "is_configured": false, 00:12:05.405 "data_offset": 0, 00:12:05.405 "data_size": 0 00:12:05.405 }, 00:12:05.405 { 00:12:05.405 "name": "BaseBdev3", 00:12:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.405 "is_configured": false, 00:12:05.405 "data_offset": 0, 00:12:05.405 "data_size": 0 00:12:05.405 }, 00:12:05.405 { 00:12:05.405 "name": "BaseBdev4", 00:12:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.405 "is_configured": false, 00:12:05.405 "data_offset": 0, 00:12:05.405 "data_size": 0 00:12:05.405 } 00:12:05.405 ] 00:12:05.405 }' 00:12:05.405 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.405 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.664 [2024-11-19 10:06:19.840850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.664 [2024-11-19 10:06:19.840925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.664 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.664 [2024-11-19 10:06:19.848711] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.664 [2024-11-19 10:06:19.848939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.664 [2024-11-19 10:06:19.848967] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.664 [2024-11-19 10:06:19.848986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.664 [2024-11-19 10:06:19.848996] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.665 [2024-11-19 10:06:19.849011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.665 [2024-11-19 10:06:19.849020] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.665 [2024-11-19 10:06:19.849035] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.665 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.665 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.665 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.665 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.922 [2024-11-19 10:06:19.899091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.922 BaseBdev1 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.922 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.923 [ 00:12:05.923 { 00:12:05.923 "name": "BaseBdev1", 00:12:05.923 "aliases": [ 00:12:05.923 "81b189fa-79fd-412a-a52a-34085624db81" 00:12:05.923 ], 00:12:05.923 "product_name": "Malloc disk", 00:12:05.923 "block_size": 512, 00:12:05.923 "num_blocks": 65536, 00:12:05.923 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:05.923 "assigned_rate_limits": { 00:12:05.923 "rw_ios_per_sec": 0, 00:12:05.923 "rw_mbytes_per_sec": 0, 00:12:05.923 "r_mbytes_per_sec": 0, 00:12:05.923 "w_mbytes_per_sec": 0 00:12:05.923 }, 00:12:05.923 "claimed": true, 00:12:05.923 "claim_type": "exclusive_write", 00:12:05.923 "zoned": false, 00:12:05.923 "supported_io_types": { 00:12:05.923 "read": true, 00:12:05.923 "write": true, 00:12:05.923 "unmap": true, 00:12:05.923 "flush": true, 00:12:05.923 "reset": true, 00:12:05.923 "nvme_admin": false, 00:12:05.923 "nvme_io": false, 00:12:05.923 "nvme_io_md": false, 00:12:05.923 "write_zeroes": true, 00:12:05.923 "zcopy": true, 00:12:05.923 "get_zone_info": false, 00:12:05.923 "zone_management": false, 00:12:05.923 "zone_append": false, 00:12:05.923 "compare": false, 00:12:05.923 "compare_and_write": false, 00:12:05.923 "abort": true, 00:12:05.923 "seek_hole": false, 00:12:05.923 "seek_data": false, 00:12:05.923 "copy": true, 00:12:05.923 "nvme_iov_md": false 00:12:05.923 }, 00:12:05.923 "memory_domains": [ 00:12:05.923 { 00:12:05.923 "dma_device_id": "system", 00:12:05.923 "dma_device_type": 1 00:12:05.923 }, 00:12:05.923 { 00:12:05.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.923 "dma_device_type": 2 00:12:05.923 } 00:12:05.923 ], 00:12:05.923 "driver_specific": {} 00:12:05.923 } 00:12:05.923 ] 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.923 "name": "Existed_Raid", 
00:12:05.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.923 "strip_size_kb": 0, 00:12:05.923 "state": "configuring", 00:12:05.923 "raid_level": "raid1", 00:12:05.923 "superblock": false, 00:12:05.923 "num_base_bdevs": 4, 00:12:05.923 "num_base_bdevs_discovered": 1, 00:12:05.923 "num_base_bdevs_operational": 4, 00:12:05.923 "base_bdevs_list": [ 00:12:05.923 { 00:12:05.923 "name": "BaseBdev1", 00:12:05.923 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:05.923 "is_configured": true, 00:12:05.923 "data_offset": 0, 00:12:05.923 "data_size": 65536 00:12:05.923 }, 00:12:05.923 { 00:12:05.923 "name": "BaseBdev2", 00:12:05.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.923 "is_configured": false, 00:12:05.923 "data_offset": 0, 00:12:05.923 "data_size": 0 00:12:05.923 }, 00:12:05.923 { 00:12:05.923 "name": "BaseBdev3", 00:12:05.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.923 "is_configured": false, 00:12:05.923 "data_offset": 0, 00:12:05.923 "data_size": 0 00:12:05.923 }, 00:12:05.923 { 00:12:05.923 "name": "BaseBdev4", 00:12:05.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.923 "is_configured": false, 00:12:05.923 "data_offset": 0, 00:12:05.923 "data_size": 0 00:12:05.923 } 00:12:05.923 ] 00:12:05.923 }' 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.923 10:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.490 [2024-11-19 10:06:20.463370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.490 [2024-11-19 10:06:20.463500] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.490 [2024-11-19 10:06:20.475469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.490 [2024-11-19 10:06:20.478277] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:06.490 [2024-11-19 10:06:20.478498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:06.490 [2024-11-19 10:06:20.478528] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:06.490 [2024-11-19 10:06:20.478548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:06.490 [2024-11-19 10:06:20.478559] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:06.490 [2024-11-19 10:06:20.478572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.490 
10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.490 "name": "Existed_Raid", 00:12:06.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.490 "strip_size_kb": 0, 00:12:06.490 "state": "configuring", 00:12:06.490 "raid_level": "raid1", 00:12:06.490 "superblock": false, 00:12:06.490 "num_base_bdevs": 4, 00:12:06.490 "num_base_bdevs_discovered": 1, 
00:12:06.490 "num_base_bdevs_operational": 4, 00:12:06.490 "base_bdevs_list": [ 00:12:06.490 { 00:12:06.490 "name": "BaseBdev1", 00:12:06.490 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:06.490 "is_configured": true, 00:12:06.490 "data_offset": 0, 00:12:06.490 "data_size": 65536 00:12:06.490 }, 00:12:06.490 { 00:12:06.490 "name": "BaseBdev2", 00:12:06.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.490 "is_configured": false, 00:12:06.490 "data_offset": 0, 00:12:06.490 "data_size": 0 00:12:06.490 }, 00:12:06.490 { 00:12:06.490 "name": "BaseBdev3", 00:12:06.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.490 "is_configured": false, 00:12:06.490 "data_offset": 0, 00:12:06.490 "data_size": 0 00:12:06.490 }, 00:12:06.490 { 00:12:06.490 "name": "BaseBdev4", 00:12:06.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.490 "is_configured": false, 00:12:06.490 "data_offset": 0, 00:12:06.490 "data_size": 0 00:12:06.490 } 00:12:06.490 ] 00:12:06.490 }' 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.490 10:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.097 [2024-11-19 10:06:21.058881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.097 BaseBdev2 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.097 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.098 [ 00:12:07.098 { 00:12:07.098 "name": "BaseBdev2", 00:12:07.098 "aliases": [ 00:12:07.098 "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5" 00:12:07.098 ], 00:12:07.098 "product_name": "Malloc disk", 00:12:07.098 "block_size": 512, 00:12:07.098 "num_blocks": 65536, 00:12:07.098 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:07.098 "assigned_rate_limits": { 00:12:07.098 "rw_ios_per_sec": 0, 00:12:07.098 "rw_mbytes_per_sec": 0, 00:12:07.098 "r_mbytes_per_sec": 0, 00:12:07.098 "w_mbytes_per_sec": 0 00:12:07.098 }, 00:12:07.098 "claimed": true, 00:12:07.098 "claim_type": "exclusive_write", 00:12:07.098 "zoned": false, 00:12:07.098 "supported_io_types": { 00:12:07.098 "read": true, 
00:12:07.098 "write": true, 00:12:07.098 "unmap": true, 00:12:07.098 "flush": true, 00:12:07.098 "reset": true, 00:12:07.098 "nvme_admin": false, 00:12:07.098 "nvme_io": false, 00:12:07.098 "nvme_io_md": false, 00:12:07.098 "write_zeroes": true, 00:12:07.098 "zcopy": true, 00:12:07.098 "get_zone_info": false, 00:12:07.098 "zone_management": false, 00:12:07.098 "zone_append": false, 00:12:07.098 "compare": false, 00:12:07.098 "compare_and_write": false, 00:12:07.098 "abort": true, 00:12:07.098 "seek_hole": false, 00:12:07.098 "seek_data": false, 00:12:07.098 "copy": true, 00:12:07.098 "nvme_iov_md": false 00:12:07.098 }, 00:12:07.098 "memory_domains": [ 00:12:07.098 { 00:12:07.098 "dma_device_id": "system", 00:12:07.098 "dma_device_type": 1 00:12:07.098 }, 00:12:07.098 { 00:12:07.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.098 "dma_device_type": 2 00:12:07.098 } 00:12:07.098 ], 00:12:07.098 "driver_specific": {} 00:12:07.098 } 00:12:07.098 ] 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.098 "name": "Existed_Raid", 00:12:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.098 "strip_size_kb": 0, 00:12:07.098 "state": "configuring", 00:12:07.098 "raid_level": "raid1", 00:12:07.098 "superblock": false, 00:12:07.098 "num_base_bdevs": 4, 00:12:07.098 "num_base_bdevs_discovered": 2, 00:12:07.098 "num_base_bdevs_operational": 4, 00:12:07.098 "base_bdevs_list": [ 00:12:07.098 { 00:12:07.098 "name": "BaseBdev1", 00:12:07.098 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:07.098 "is_configured": true, 00:12:07.098 "data_offset": 0, 00:12:07.098 "data_size": 65536 00:12:07.098 }, 00:12:07.098 { 00:12:07.098 "name": "BaseBdev2", 00:12:07.098 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:07.098 "is_configured": true, 
00:12:07.098 "data_offset": 0, 00:12:07.098 "data_size": 65536 00:12:07.098 }, 00:12:07.098 { 00:12:07.098 "name": "BaseBdev3", 00:12:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.098 "is_configured": false, 00:12:07.098 "data_offset": 0, 00:12:07.098 "data_size": 0 00:12:07.098 }, 00:12:07.098 { 00:12:07.098 "name": "BaseBdev4", 00:12:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.098 "is_configured": false, 00:12:07.098 "data_offset": 0, 00:12:07.098 "data_size": 0 00:12:07.098 } 00:12:07.098 ] 00:12:07.098 }' 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.098 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.664 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 [2024-11-19 10:06:21.669156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.665 BaseBdev3 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 [ 00:12:07.665 { 00:12:07.665 "name": "BaseBdev3", 00:12:07.665 "aliases": [ 00:12:07.665 "eebd26b1-9982-45fb-934a-3fb75abb5873" 00:12:07.665 ], 00:12:07.665 "product_name": "Malloc disk", 00:12:07.665 "block_size": 512, 00:12:07.665 "num_blocks": 65536, 00:12:07.665 "uuid": "eebd26b1-9982-45fb-934a-3fb75abb5873", 00:12:07.665 "assigned_rate_limits": { 00:12:07.665 "rw_ios_per_sec": 0, 00:12:07.665 "rw_mbytes_per_sec": 0, 00:12:07.665 "r_mbytes_per_sec": 0, 00:12:07.665 "w_mbytes_per_sec": 0 00:12:07.665 }, 00:12:07.665 "claimed": true, 00:12:07.665 "claim_type": "exclusive_write", 00:12:07.665 "zoned": false, 00:12:07.665 "supported_io_types": { 00:12:07.665 "read": true, 00:12:07.665 "write": true, 00:12:07.665 "unmap": true, 00:12:07.665 "flush": true, 00:12:07.665 "reset": true, 00:12:07.665 "nvme_admin": false, 00:12:07.665 "nvme_io": false, 00:12:07.665 "nvme_io_md": false, 00:12:07.665 "write_zeroes": true, 00:12:07.665 "zcopy": true, 00:12:07.665 "get_zone_info": false, 00:12:07.665 "zone_management": false, 00:12:07.665 "zone_append": false, 00:12:07.665 "compare": false, 00:12:07.665 "compare_and_write": false, 
00:12:07.665 "abort": true, 00:12:07.665 "seek_hole": false, 00:12:07.665 "seek_data": false, 00:12:07.665 "copy": true, 00:12:07.665 "nvme_iov_md": false 00:12:07.665 }, 00:12:07.665 "memory_domains": [ 00:12:07.665 { 00:12:07.665 "dma_device_id": "system", 00:12:07.665 "dma_device_type": 1 00:12:07.665 }, 00:12:07.665 { 00:12:07.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.665 "dma_device_type": 2 00:12:07.665 } 00:12:07.665 ], 00:12:07.665 "driver_specific": {} 00:12:07.665 } 00:12:07.665 ] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.665 "name": "Existed_Raid", 00:12:07.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.665 "strip_size_kb": 0, 00:12:07.665 "state": "configuring", 00:12:07.665 "raid_level": "raid1", 00:12:07.665 "superblock": false, 00:12:07.665 "num_base_bdevs": 4, 00:12:07.665 "num_base_bdevs_discovered": 3, 00:12:07.665 "num_base_bdevs_operational": 4, 00:12:07.665 "base_bdevs_list": [ 00:12:07.665 { 00:12:07.665 "name": "BaseBdev1", 00:12:07.665 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:07.665 "is_configured": true, 00:12:07.665 "data_offset": 0, 00:12:07.665 "data_size": 65536 00:12:07.665 }, 00:12:07.665 { 00:12:07.665 "name": "BaseBdev2", 00:12:07.665 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:07.665 "is_configured": true, 00:12:07.665 "data_offset": 0, 00:12:07.665 "data_size": 65536 00:12:07.665 }, 00:12:07.665 { 00:12:07.665 "name": "BaseBdev3", 00:12:07.665 "uuid": "eebd26b1-9982-45fb-934a-3fb75abb5873", 00:12:07.665 "is_configured": true, 00:12:07.665 "data_offset": 0, 00:12:07.665 "data_size": 65536 00:12:07.665 }, 00:12:07.665 { 00:12:07.665 "name": "BaseBdev4", 00:12:07.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.665 "is_configured": false, 
00:12:07.665 "data_offset": 0, 00:12:07.665 "data_size": 0 00:12:07.665 } 00:12:07.665 ] 00:12:07.665 }' 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.665 10:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.232 [2024-11-19 10:06:22.294083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.232 [2024-11-19 10:06:22.294194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.232 [2024-11-19 10:06:22.294212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:08.232 [2024-11-19 10:06:22.294628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:08.232 [2024-11-19 10:06:22.294913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.232 [2024-11-19 10:06:22.294938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:08.232 [2024-11-19 10:06:22.295318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.232 BaseBdev4 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.232 [ 00:12:08.232 { 00:12:08.232 "name": "BaseBdev4", 00:12:08.232 "aliases": [ 00:12:08.232 "ea71456b-3736-4b55-a24e-f2f8fc255162" 00:12:08.232 ], 00:12:08.232 "product_name": "Malloc disk", 00:12:08.232 "block_size": 512, 00:12:08.232 "num_blocks": 65536, 00:12:08.232 "uuid": "ea71456b-3736-4b55-a24e-f2f8fc255162", 00:12:08.232 "assigned_rate_limits": { 00:12:08.232 "rw_ios_per_sec": 0, 00:12:08.232 "rw_mbytes_per_sec": 0, 00:12:08.232 "r_mbytes_per_sec": 0, 00:12:08.232 "w_mbytes_per_sec": 0 00:12:08.232 }, 00:12:08.232 "claimed": true, 00:12:08.232 "claim_type": "exclusive_write", 00:12:08.232 "zoned": false, 00:12:08.232 "supported_io_types": { 00:12:08.232 "read": true, 00:12:08.232 "write": true, 00:12:08.232 "unmap": true, 00:12:08.232 "flush": true, 00:12:08.232 "reset": true, 00:12:08.232 
"nvme_admin": false, 00:12:08.232 "nvme_io": false, 00:12:08.232 "nvme_io_md": false, 00:12:08.232 "write_zeroes": true, 00:12:08.232 "zcopy": true, 00:12:08.232 "get_zone_info": false, 00:12:08.232 "zone_management": false, 00:12:08.232 "zone_append": false, 00:12:08.232 "compare": false, 00:12:08.232 "compare_and_write": false, 00:12:08.232 "abort": true, 00:12:08.232 "seek_hole": false, 00:12:08.232 "seek_data": false, 00:12:08.232 "copy": true, 00:12:08.232 "nvme_iov_md": false 00:12:08.232 }, 00:12:08.232 "memory_domains": [ 00:12:08.232 { 00:12:08.232 "dma_device_id": "system", 00:12:08.232 "dma_device_type": 1 00:12:08.232 }, 00:12:08.232 { 00:12:08.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.232 "dma_device_type": 2 00:12:08.232 } 00:12:08.232 ], 00:12:08.232 "driver_specific": {} 00:12:08.232 } 00:12:08.232 ] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.232 10:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.232 "name": "Existed_Raid", 00:12:08.232 "uuid": "705d4fd3-c45c-445b-9560-0515b9e4dc98", 00:12:08.232 "strip_size_kb": 0, 00:12:08.232 "state": "online", 00:12:08.232 "raid_level": "raid1", 00:12:08.232 "superblock": false, 00:12:08.232 "num_base_bdevs": 4, 00:12:08.232 "num_base_bdevs_discovered": 4, 00:12:08.232 "num_base_bdevs_operational": 4, 00:12:08.232 "base_bdevs_list": [ 00:12:08.232 { 00:12:08.232 "name": "BaseBdev1", 00:12:08.232 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:08.232 "is_configured": true, 00:12:08.232 "data_offset": 0, 00:12:08.232 "data_size": 65536 00:12:08.232 }, 00:12:08.232 { 00:12:08.232 "name": "BaseBdev2", 00:12:08.232 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:08.232 "is_configured": true, 00:12:08.232 "data_offset": 0, 00:12:08.232 "data_size": 65536 00:12:08.232 }, 00:12:08.232 { 00:12:08.232 "name": "BaseBdev3", 00:12:08.232 "uuid": 
"eebd26b1-9982-45fb-934a-3fb75abb5873", 00:12:08.232 "is_configured": true, 00:12:08.232 "data_offset": 0, 00:12:08.232 "data_size": 65536 00:12:08.232 }, 00:12:08.232 { 00:12:08.232 "name": "BaseBdev4", 00:12:08.232 "uuid": "ea71456b-3736-4b55-a24e-f2f8fc255162", 00:12:08.232 "is_configured": true, 00:12:08.232 "data_offset": 0, 00:12:08.232 "data_size": 65536 00:12:08.232 } 00:12:08.232 ] 00:12:08.232 }' 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.232 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.800 [2024-11-19 10:06:22.862786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.800 10:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.800 10:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.800 "name": "Existed_Raid", 00:12:08.800 "aliases": [ 00:12:08.800 "705d4fd3-c45c-445b-9560-0515b9e4dc98" 00:12:08.800 ], 00:12:08.800 "product_name": "Raid Volume", 00:12:08.800 "block_size": 512, 00:12:08.800 "num_blocks": 65536, 00:12:08.800 "uuid": "705d4fd3-c45c-445b-9560-0515b9e4dc98", 00:12:08.800 "assigned_rate_limits": { 00:12:08.800 "rw_ios_per_sec": 0, 00:12:08.800 "rw_mbytes_per_sec": 0, 00:12:08.800 "r_mbytes_per_sec": 0, 00:12:08.800 "w_mbytes_per_sec": 0 00:12:08.800 }, 00:12:08.800 "claimed": false, 00:12:08.800 "zoned": false, 00:12:08.800 "supported_io_types": { 00:12:08.800 "read": true, 00:12:08.800 "write": true, 00:12:08.800 "unmap": false, 00:12:08.800 "flush": false, 00:12:08.800 "reset": true, 00:12:08.800 "nvme_admin": false, 00:12:08.800 "nvme_io": false, 00:12:08.800 "nvme_io_md": false, 00:12:08.800 "write_zeroes": true, 00:12:08.800 "zcopy": false, 00:12:08.800 "get_zone_info": false, 00:12:08.800 "zone_management": false, 00:12:08.800 "zone_append": false, 00:12:08.800 "compare": false, 00:12:08.800 "compare_and_write": false, 00:12:08.800 "abort": false, 00:12:08.800 "seek_hole": false, 00:12:08.800 "seek_data": false, 00:12:08.800 "copy": false, 00:12:08.800 "nvme_iov_md": false 00:12:08.800 }, 00:12:08.800 "memory_domains": [ 00:12:08.800 { 00:12:08.800 "dma_device_id": "system", 00:12:08.800 "dma_device_type": 1 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.800 "dma_device_type": 2 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "system", 00:12:08.800 "dma_device_type": 1 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.800 "dma_device_type": 2 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "system", 00:12:08.800 "dma_device_type": 1 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:08.800 "dma_device_type": 2 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "system", 00:12:08.800 "dma_device_type": 1 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.800 "dma_device_type": 2 00:12:08.800 } 00:12:08.800 ], 00:12:08.800 "driver_specific": { 00:12:08.800 "raid": { 00:12:08.800 "uuid": "705d4fd3-c45c-445b-9560-0515b9e4dc98", 00:12:08.800 "strip_size_kb": 0, 00:12:08.800 "state": "online", 00:12:08.800 "raid_level": "raid1", 00:12:08.800 "superblock": false, 00:12:08.800 "num_base_bdevs": 4, 00:12:08.800 "num_base_bdevs_discovered": 4, 00:12:08.800 "num_base_bdevs_operational": 4, 00:12:08.800 "base_bdevs_list": [ 00:12:08.800 { 00:12:08.800 "name": "BaseBdev1", 00:12:08.800 "uuid": "81b189fa-79fd-412a-a52a-34085624db81", 00:12:08.800 "is_configured": true, 00:12:08.800 "data_offset": 0, 00:12:08.800 "data_size": 65536 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "name": "BaseBdev2", 00:12:08.800 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:08.800 "is_configured": true, 00:12:08.800 "data_offset": 0, 00:12:08.800 "data_size": 65536 00:12:08.800 }, 00:12:08.800 { 00:12:08.800 "name": "BaseBdev3", 00:12:08.800 "uuid": "eebd26b1-9982-45fb-934a-3fb75abb5873", 00:12:08.800 "is_configured": true, 00:12:08.800 "data_offset": 0, 00:12:08.800 "data_size": 65536 00:12:08.800 }, 00:12:08.800 { 00:12:08.801 "name": "BaseBdev4", 00:12:08.801 "uuid": "ea71456b-3736-4b55-a24e-f2f8fc255162", 00:12:08.801 "is_configured": true, 00:12:08.801 "data_offset": 0, 00:12:08.801 "data_size": 65536 00:12:08.801 } 00:12:08.801 ] 00:12:08.801 } 00:12:08.801 } 00:12:08.801 }' 00:12:08.801 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.801 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:08.801 BaseBdev2 00:12:08.801 BaseBdev3 
00:12:08.801 BaseBdev4' 00:12:08.801 10:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.801 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 10:06:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.060 10:06:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.060 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 [2024-11-19 10:06:23.242551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.319 
10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.319 "name": "Existed_Raid", 00:12:09.319 "uuid": "705d4fd3-c45c-445b-9560-0515b9e4dc98", 00:12:09.319 "strip_size_kb": 0, 00:12:09.319 "state": "online", 00:12:09.319 "raid_level": "raid1", 00:12:09.319 "superblock": false, 00:12:09.319 "num_base_bdevs": 4, 00:12:09.319 "num_base_bdevs_discovered": 3, 00:12:09.319 "num_base_bdevs_operational": 3, 00:12:09.319 "base_bdevs_list": [ 00:12:09.319 { 00:12:09.319 "name": null, 00:12:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.319 "is_configured": false, 00:12:09.319 "data_offset": 0, 00:12:09.319 "data_size": 65536 00:12:09.319 }, 00:12:09.319 { 00:12:09.319 "name": "BaseBdev2", 00:12:09.319 "uuid": "06ed6a74-f453-4a26-a1cb-93dcfca5c8b5", 00:12:09.319 "is_configured": true, 00:12:09.319 "data_offset": 0, 00:12:09.319 "data_size": 65536 00:12:09.319 }, 00:12:09.319 { 00:12:09.319 "name": "BaseBdev3", 00:12:09.319 "uuid": "eebd26b1-9982-45fb-934a-3fb75abb5873", 00:12:09.319 "is_configured": true, 00:12:09.319 "data_offset": 0, 
00:12:09.319 "data_size": 65536 00:12:09.319 }, 00:12:09.319 { 00:12:09.319 "name": "BaseBdev4", 00:12:09.319 "uuid": "ea71456b-3736-4b55-a24e-f2f8fc255162", 00:12:09.319 "is_configured": true, 00:12:09.319 "data_offset": 0, 00:12:09.319 "data_size": 65536 00:12:09.319 } 00:12:09.319 ] 00:12:09.319 }' 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.319 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.887 10:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.887 [2024-11-19 10:06:23.923317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.887 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.887 [2024-11-19 10:06:24.088511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.146 [2024-11-19 10:06:24.248604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:10.146 [2024-11-19 10:06:24.248760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.146 [2024-11-19 10:06:24.345328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.146 [2024-11-19 10:06:24.345482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.146 [2024-11-19 10:06:24.345564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:10.146 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.147 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 BaseBdev2 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 [ 00:12:10.406 { 00:12:10.406 "name": "BaseBdev2", 00:12:10.406 "aliases": [ 00:12:10.406 "3b51ff38-0a07-4a31-be4d-6f9060a47682" 00:12:10.406 ], 00:12:10.406 "product_name": "Malloc disk", 00:12:10.406 "block_size": 512, 00:12:10.406 "num_blocks": 65536, 00:12:10.406 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:10.406 "assigned_rate_limits": { 00:12:10.406 "rw_ios_per_sec": 0, 00:12:10.406 "rw_mbytes_per_sec": 0, 00:12:10.406 "r_mbytes_per_sec": 0, 00:12:10.406 "w_mbytes_per_sec": 0 00:12:10.406 }, 00:12:10.406 "claimed": false, 00:12:10.406 "zoned": false, 00:12:10.406 "supported_io_types": { 00:12:10.406 "read": true, 00:12:10.406 "write": true, 00:12:10.406 "unmap": true, 00:12:10.406 "flush": true, 00:12:10.406 "reset": true, 00:12:10.406 "nvme_admin": false, 00:12:10.406 "nvme_io": false, 00:12:10.406 "nvme_io_md": false, 00:12:10.406 "write_zeroes": true, 00:12:10.406 "zcopy": true, 00:12:10.406 "get_zone_info": false, 00:12:10.406 "zone_management": false, 00:12:10.406 "zone_append": false, 
00:12:10.406 "compare": false, 00:12:10.406 "compare_and_write": false, 00:12:10.406 "abort": true, 00:12:10.406 "seek_hole": false, 00:12:10.406 "seek_data": false, 00:12:10.406 "copy": true, 00:12:10.406 "nvme_iov_md": false 00:12:10.406 }, 00:12:10.406 "memory_domains": [ 00:12:10.406 { 00:12:10.406 "dma_device_id": "system", 00:12:10.406 "dma_device_type": 1 00:12:10.406 }, 00:12:10.406 { 00:12:10.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.406 "dma_device_type": 2 00:12:10.406 } 00:12:10.406 ], 00:12:10.406 "driver_specific": {} 00:12:10.406 } 00:12:10.406 ] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 BaseBdev3 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.406 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.406 [ 00:12:10.406 { 00:12:10.406 "name": "BaseBdev3", 00:12:10.406 "aliases": [ 00:12:10.406 "94b73f73-2231-4e15-a106-4b9b67cf2ce9" 00:12:10.407 ], 00:12:10.407 "product_name": "Malloc disk", 00:12:10.407 "block_size": 512, 00:12:10.407 "num_blocks": 65536, 00:12:10.407 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:10.407 "assigned_rate_limits": { 00:12:10.407 "rw_ios_per_sec": 0, 00:12:10.407 "rw_mbytes_per_sec": 0, 00:12:10.407 "r_mbytes_per_sec": 0, 00:12:10.407 "w_mbytes_per_sec": 0 00:12:10.407 }, 00:12:10.407 "claimed": false, 00:12:10.407 "zoned": false, 00:12:10.407 "supported_io_types": { 00:12:10.407 "read": true, 00:12:10.407 "write": true, 00:12:10.407 "unmap": true, 00:12:10.407 "flush": true, 00:12:10.407 "reset": true, 00:12:10.407 "nvme_admin": false, 00:12:10.407 "nvme_io": false, 00:12:10.407 "nvme_io_md": false, 00:12:10.407 "write_zeroes": true, 00:12:10.407 "zcopy": true, 00:12:10.407 "get_zone_info": false, 00:12:10.407 "zone_management": false, 00:12:10.407 "zone_append": false, 
00:12:10.407 "compare": false, 00:12:10.407 "compare_and_write": false, 00:12:10.407 "abort": true, 00:12:10.407 "seek_hole": false, 00:12:10.407 "seek_data": false, 00:12:10.407 "copy": true, 00:12:10.407 "nvme_iov_md": false 00:12:10.407 }, 00:12:10.407 "memory_domains": [ 00:12:10.407 { 00:12:10.407 "dma_device_id": "system", 00:12:10.407 "dma_device_type": 1 00:12:10.407 }, 00:12:10.407 { 00:12:10.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.407 "dma_device_type": 2 00:12:10.407 } 00:12:10.407 ], 00:12:10.407 "driver_specific": {} 00:12:10.407 } 00:12:10.407 ] 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.407 BaseBdev4 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.407 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.666 [ 00:12:10.666 { 00:12:10.666 "name": "BaseBdev4", 00:12:10.666 "aliases": [ 00:12:10.666 "8cb27492-dd25-4af1-8e8c-15e717b98b9d" 00:12:10.666 ], 00:12:10.666 "product_name": "Malloc disk", 00:12:10.666 "block_size": 512, 00:12:10.666 "num_blocks": 65536, 00:12:10.666 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:10.666 "assigned_rate_limits": { 00:12:10.666 "rw_ios_per_sec": 0, 00:12:10.666 "rw_mbytes_per_sec": 0, 00:12:10.666 "r_mbytes_per_sec": 0, 00:12:10.666 "w_mbytes_per_sec": 0 00:12:10.666 }, 00:12:10.666 "claimed": false, 00:12:10.666 "zoned": false, 00:12:10.666 "supported_io_types": { 00:12:10.666 "read": true, 00:12:10.666 "write": true, 00:12:10.666 "unmap": true, 00:12:10.666 "flush": true, 00:12:10.666 "reset": true, 00:12:10.666 "nvme_admin": false, 00:12:10.666 "nvme_io": false, 00:12:10.666 "nvme_io_md": false, 00:12:10.666 "write_zeroes": true, 00:12:10.666 "zcopy": true, 00:12:10.666 "get_zone_info": false, 00:12:10.666 "zone_management": false, 00:12:10.666 "zone_append": false, 
00:12:10.666 "compare": false, 00:12:10.666 "compare_and_write": false, 00:12:10.666 "abort": true, 00:12:10.666 "seek_hole": false, 00:12:10.666 "seek_data": false, 00:12:10.666 "copy": true, 00:12:10.666 "nvme_iov_md": false 00:12:10.666 }, 00:12:10.666 "memory_domains": [ 00:12:10.666 { 00:12:10.666 "dma_device_id": "system", 00:12:10.666 "dma_device_type": 1 00:12:10.666 }, 00:12:10.666 { 00:12:10.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.666 "dma_device_type": 2 00:12:10.666 } 00:12:10.666 ], 00:12:10.666 "driver_specific": {} 00:12:10.666 } 00:12:10.666 ] 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.666 [2024-11-19 10:06:24.659891] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.666 [2024-11-19 10:06:24.659964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.666 [2024-11-19 10:06:24.660005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.666 [2024-11-19 10:06:24.662972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.666 [2024-11-19 10:06:24.663049] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:10.666 "name": "Existed_Raid", 00:12:10.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.666 "strip_size_kb": 0, 00:12:10.666 "state": "configuring", 00:12:10.666 "raid_level": "raid1", 00:12:10.666 "superblock": false, 00:12:10.666 "num_base_bdevs": 4, 00:12:10.666 "num_base_bdevs_discovered": 3, 00:12:10.666 "num_base_bdevs_operational": 4, 00:12:10.666 "base_bdevs_list": [ 00:12:10.666 { 00:12:10.666 "name": "BaseBdev1", 00:12:10.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.666 "is_configured": false, 00:12:10.666 "data_offset": 0, 00:12:10.666 "data_size": 0 00:12:10.666 }, 00:12:10.666 { 00:12:10.666 "name": "BaseBdev2", 00:12:10.666 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:10.666 "is_configured": true, 00:12:10.666 "data_offset": 0, 00:12:10.666 "data_size": 65536 00:12:10.666 }, 00:12:10.666 { 00:12:10.666 "name": "BaseBdev3", 00:12:10.666 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:10.666 "is_configured": true, 00:12:10.666 "data_offset": 0, 00:12:10.666 "data_size": 65536 00:12:10.666 }, 00:12:10.666 { 00:12:10.666 "name": "BaseBdev4", 00:12:10.666 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:10.666 "is_configured": true, 00:12:10.666 "data_offset": 0, 00:12:10.666 "data_size": 65536 00:12:10.666 } 00:12:10.666 ] 00:12:10.666 }' 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.666 10:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.233 [2024-11-19 10:06:25.212080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.233 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.233 "name": "Existed_Raid", 00:12:11.233 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:11.233 "strip_size_kb": 0, 00:12:11.234 "state": "configuring", 00:12:11.234 "raid_level": "raid1", 00:12:11.234 "superblock": false, 00:12:11.234 "num_base_bdevs": 4, 00:12:11.234 "num_base_bdevs_discovered": 2, 00:12:11.234 "num_base_bdevs_operational": 4, 00:12:11.234 "base_bdevs_list": [ 00:12:11.234 { 00:12:11.234 "name": "BaseBdev1", 00:12:11.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.234 "is_configured": false, 00:12:11.234 "data_offset": 0, 00:12:11.234 "data_size": 0 00:12:11.234 }, 00:12:11.234 { 00:12:11.234 "name": null, 00:12:11.234 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:11.234 "is_configured": false, 00:12:11.234 "data_offset": 0, 00:12:11.234 "data_size": 65536 00:12:11.234 }, 00:12:11.234 { 00:12:11.234 "name": "BaseBdev3", 00:12:11.234 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:11.234 "is_configured": true, 00:12:11.234 "data_offset": 0, 00:12:11.234 "data_size": 65536 00:12:11.234 }, 00:12:11.234 { 00:12:11.234 "name": "BaseBdev4", 00:12:11.234 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:11.234 "is_configured": true, 00:12:11.234 "data_offset": 0, 00:12:11.234 "data_size": 65536 00:12:11.234 } 00:12:11.234 ] 00:12:11.234 }' 00:12:11.234 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.234 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.492 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.492 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.492 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.492 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.810 [2024-11-19 10:06:25.807686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.810 BaseBdev1 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.810 [ 00:12:11.810 { 00:12:11.810 "name": "BaseBdev1", 00:12:11.810 "aliases": [ 00:12:11.810 "c883728b-e3a5-4a3b-914e-380f6d47f177" 00:12:11.810 ], 00:12:11.810 "product_name": "Malloc disk", 00:12:11.810 "block_size": 512, 00:12:11.810 "num_blocks": 65536, 00:12:11.810 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:11.810 "assigned_rate_limits": { 00:12:11.810 "rw_ios_per_sec": 0, 00:12:11.810 "rw_mbytes_per_sec": 0, 00:12:11.810 "r_mbytes_per_sec": 0, 00:12:11.810 "w_mbytes_per_sec": 0 00:12:11.810 }, 00:12:11.810 "claimed": true, 00:12:11.810 "claim_type": "exclusive_write", 00:12:11.810 "zoned": false, 00:12:11.810 "supported_io_types": { 00:12:11.810 "read": true, 00:12:11.810 "write": true, 00:12:11.810 "unmap": true, 00:12:11.810 "flush": true, 00:12:11.810 "reset": true, 00:12:11.810 "nvme_admin": false, 00:12:11.810 "nvme_io": false, 00:12:11.810 "nvme_io_md": false, 00:12:11.810 "write_zeroes": true, 00:12:11.810 "zcopy": true, 00:12:11.810 "get_zone_info": false, 00:12:11.810 "zone_management": false, 00:12:11.810 "zone_append": false, 00:12:11.810 "compare": false, 00:12:11.810 "compare_and_write": false, 00:12:11.810 "abort": true, 00:12:11.810 "seek_hole": false, 00:12:11.810 "seek_data": false, 00:12:11.810 "copy": true, 00:12:11.810 "nvme_iov_md": false 00:12:11.810 }, 00:12:11.810 "memory_domains": [ 00:12:11.810 { 00:12:11.810 "dma_device_id": "system", 00:12:11.810 "dma_device_type": 1 00:12:11.810 }, 00:12:11.810 { 00:12:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.810 "dma_device_type": 2 00:12:11.810 } 00:12:11.810 ], 00:12:11.810 "driver_specific": {} 00:12:11.810 } 00:12:11.810 ] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.810 "name": "Existed_Raid", 00:12:11.810 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:11.810 "strip_size_kb": 0, 00:12:11.810 "state": "configuring", 00:12:11.810 "raid_level": "raid1", 00:12:11.810 "superblock": false, 00:12:11.810 "num_base_bdevs": 4, 00:12:11.810 "num_base_bdevs_discovered": 3, 00:12:11.810 "num_base_bdevs_operational": 4, 00:12:11.810 "base_bdevs_list": [ 00:12:11.810 { 00:12:11.810 "name": "BaseBdev1", 00:12:11.810 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:11.810 "is_configured": true, 00:12:11.810 "data_offset": 0, 00:12:11.810 "data_size": 65536 00:12:11.810 }, 00:12:11.810 { 00:12:11.810 "name": null, 00:12:11.810 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:11.810 "is_configured": false, 00:12:11.810 "data_offset": 0, 00:12:11.810 "data_size": 65536 00:12:11.810 }, 00:12:11.810 { 00:12:11.810 "name": "BaseBdev3", 00:12:11.810 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:11.810 "is_configured": true, 00:12:11.810 "data_offset": 0, 00:12:11.810 "data_size": 65536 00:12:11.810 }, 00:12:11.810 { 00:12:11.810 "name": "BaseBdev4", 00:12:11.810 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:11.810 "is_configured": true, 00:12:11.810 "data_offset": 0, 00:12:11.810 "data_size": 65536 00:12:11.810 } 00:12:11.810 ] 00:12:11.810 }' 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.810 10:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.377 [2024-11-19 10:06:26.412060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.377 "name": "Existed_Raid", 00:12:12.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.377 "strip_size_kb": 0, 00:12:12.377 "state": "configuring", 00:12:12.377 "raid_level": "raid1", 00:12:12.377 "superblock": false, 00:12:12.377 "num_base_bdevs": 4, 00:12:12.377 "num_base_bdevs_discovered": 2, 00:12:12.377 "num_base_bdevs_operational": 4, 00:12:12.377 "base_bdevs_list": [ 00:12:12.377 { 00:12:12.377 "name": "BaseBdev1", 00:12:12.377 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:12.377 "is_configured": true, 00:12:12.377 "data_offset": 0, 00:12:12.377 "data_size": 65536 00:12:12.377 }, 00:12:12.377 { 00:12:12.377 "name": null, 00:12:12.377 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:12.377 "is_configured": false, 00:12:12.377 "data_offset": 0, 00:12:12.377 "data_size": 65536 00:12:12.377 }, 00:12:12.377 { 00:12:12.377 "name": null, 00:12:12.377 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:12.377 "is_configured": false, 00:12:12.377 "data_offset": 0, 00:12:12.377 "data_size": 65536 00:12:12.377 }, 00:12:12.377 { 00:12:12.377 "name": "BaseBdev4", 00:12:12.377 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:12.377 "is_configured": true, 00:12:12.377 "data_offset": 0, 00:12:12.377 "data_size": 65536 00:12:12.377 } 00:12:12.377 ] 00:12:12.377 }' 00:12:12.377 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.377 10:06:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.951 [2024-11-19 10:06:26.988130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.951 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.952 10:06:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.952 10:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.952 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.952 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.952 "name": "Existed_Raid", 00:12:12.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.952 "strip_size_kb": 0, 00:12:12.952 "state": "configuring", 00:12:12.952 "raid_level": "raid1", 00:12:12.952 "superblock": false, 00:12:12.952 "num_base_bdevs": 4, 00:12:12.952 "num_base_bdevs_discovered": 3, 00:12:12.952 "num_base_bdevs_operational": 4, 00:12:12.952 "base_bdevs_list": [ 00:12:12.952 { 00:12:12.952 "name": "BaseBdev1", 00:12:12.952 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:12.952 "is_configured": true, 00:12:12.952 "data_offset": 0, 00:12:12.952 "data_size": 65536 00:12:12.952 }, 00:12:12.952 { 00:12:12.952 "name": null, 00:12:12.952 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:12.952 "is_configured": false, 00:12:12.952 "data_offset": 
0, 00:12:12.952 "data_size": 65536 00:12:12.952 }, 00:12:12.952 { 00:12:12.952 "name": "BaseBdev3", 00:12:12.952 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:12.952 "is_configured": true, 00:12:12.952 "data_offset": 0, 00:12:12.952 "data_size": 65536 00:12:12.952 }, 00:12:12.952 { 00:12:12.952 "name": "BaseBdev4", 00:12:12.952 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:12.952 "is_configured": true, 00:12:12.952 "data_offset": 0, 00:12:12.952 "data_size": 65536 00:12:12.952 } 00:12:12.952 ] 00:12:12.952 }' 00:12:12.952 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.952 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.520 [2024-11-19 10:06:27.556422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.520 10:06:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.520 "name": "Existed_Raid", 00:12:13.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.520 "strip_size_kb": 0, 00:12:13.520 "state": "configuring", 00:12:13.520 
"raid_level": "raid1", 00:12:13.520 "superblock": false, 00:12:13.520 "num_base_bdevs": 4, 00:12:13.520 "num_base_bdevs_discovered": 2, 00:12:13.520 "num_base_bdevs_operational": 4, 00:12:13.520 "base_bdevs_list": [ 00:12:13.520 { 00:12:13.520 "name": null, 00:12:13.520 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:13.520 "is_configured": false, 00:12:13.520 "data_offset": 0, 00:12:13.520 "data_size": 65536 00:12:13.520 }, 00:12:13.520 { 00:12:13.520 "name": null, 00:12:13.520 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:13.520 "is_configured": false, 00:12:13.520 "data_offset": 0, 00:12:13.520 "data_size": 65536 00:12:13.520 }, 00:12:13.520 { 00:12:13.520 "name": "BaseBdev3", 00:12:13.520 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:13.520 "is_configured": true, 00:12:13.520 "data_offset": 0, 00:12:13.520 "data_size": 65536 00:12:13.520 }, 00:12:13.520 { 00:12:13.520 "name": "BaseBdev4", 00:12:13.520 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:13.520 "is_configured": true, 00:12:13.520 "data_offset": 0, 00:12:13.520 "data_size": 65536 00:12:13.520 } 00:12:13.520 ] 00:12:13.520 }' 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.520 10:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.087 [2024-11-19 10:06:28.220888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.087 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.088 "name": "Existed_Raid", 00:12:14.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.088 "strip_size_kb": 0, 00:12:14.088 "state": "configuring", 00:12:14.088 "raid_level": "raid1", 00:12:14.088 "superblock": false, 00:12:14.088 "num_base_bdevs": 4, 00:12:14.088 "num_base_bdevs_discovered": 3, 00:12:14.088 "num_base_bdevs_operational": 4, 00:12:14.088 "base_bdevs_list": [ 00:12:14.088 { 00:12:14.088 "name": null, 00:12:14.088 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:14.088 "is_configured": false, 00:12:14.088 "data_offset": 0, 00:12:14.088 "data_size": 65536 00:12:14.088 }, 00:12:14.088 { 00:12:14.088 "name": "BaseBdev2", 00:12:14.088 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:14.088 "is_configured": true, 00:12:14.088 "data_offset": 0, 00:12:14.088 "data_size": 65536 00:12:14.088 }, 00:12:14.088 { 00:12:14.088 "name": "BaseBdev3", 00:12:14.088 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:14.088 "is_configured": true, 00:12:14.088 "data_offset": 0, 00:12:14.088 "data_size": 65536 00:12:14.088 }, 00:12:14.088 { 00:12:14.088 "name": "BaseBdev4", 00:12:14.088 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:14.088 "is_configured": true, 00:12:14.088 "data_offset": 0, 00:12:14.088 "data_size": 65536 00:12:14.088 } 00:12:14.088 ] 00:12:14.088 }' 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.088 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.654 10:06:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.654 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.655 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c883728b-e3a5-4a3b-914e-380f6d47f177 00:12:14.655 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.655 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.914 [2024-11-19 10:06:28.915500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:14.914 [2024-11-19 10:06:28.915563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:14.914 [2024-11-19 10:06:28.915579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:14.914 
[2024-11-19 10:06:28.916005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:14.914 [2024-11-19 10:06:28.916231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:14.914 [2024-11-19 10:06:28.916301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:14.914 [2024-11-19 10:06:28.916635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.914 NewBaseBdev 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.914 [ 00:12:14.914 { 00:12:14.914 "name": "NewBaseBdev", 00:12:14.914 "aliases": [ 00:12:14.914 "c883728b-e3a5-4a3b-914e-380f6d47f177" 00:12:14.914 ], 00:12:14.914 "product_name": "Malloc disk", 00:12:14.914 "block_size": 512, 00:12:14.914 "num_blocks": 65536, 00:12:14.914 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:14.914 "assigned_rate_limits": { 00:12:14.914 "rw_ios_per_sec": 0, 00:12:14.914 "rw_mbytes_per_sec": 0, 00:12:14.914 "r_mbytes_per_sec": 0, 00:12:14.914 "w_mbytes_per_sec": 0 00:12:14.914 }, 00:12:14.914 "claimed": true, 00:12:14.914 "claim_type": "exclusive_write", 00:12:14.914 "zoned": false, 00:12:14.914 "supported_io_types": { 00:12:14.914 "read": true, 00:12:14.914 "write": true, 00:12:14.914 "unmap": true, 00:12:14.914 "flush": true, 00:12:14.914 "reset": true, 00:12:14.914 "nvme_admin": false, 00:12:14.914 "nvme_io": false, 00:12:14.914 "nvme_io_md": false, 00:12:14.914 "write_zeroes": true, 00:12:14.914 "zcopy": true, 00:12:14.914 "get_zone_info": false, 00:12:14.914 "zone_management": false, 00:12:14.914 "zone_append": false, 00:12:14.914 "compare": false, 00:12:14.914 "compare_and_write": false, 00:12:14.914 "abort": true, 00:12:14.914 "seek_hole": false, 00:12:14.914 "seek_data": false, 00:12:14.914 "copy": true, 00:12:14.914 "nvme_iov_md": false 00:12:14.914 }, 00:12:14.914 "memory_domains": [ 00:12:14.914 { 00:12:14.914 "dma_device_id": "system", 00:12:14.914 "dma_device_type": 1 00:12:14.914 }, 00:12:14.914 { 00:12:14.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.914 "dma_device_type": 2 00:12:14.914 } 00:12:14.914 ], 00:12:14.914 "driver_specific": {} 00:12:14.914 } 00:12:14.914 ] 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.914 10:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.914 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.914 "name": "Existed_Raid", 00:12:14.914 "uuid": "0bc087fc-3b03-44f8-8761-104dd2cdab01", 00:12:14.914 "strip_size_kb": 0, 00:12:14.914 "state": "online", 00:12:14.914 
"raid_level": "raid1", 00:12:14.914 "superblock": false, 00:12:14.914 "num_base_bdevs": 4, 00:12:14.914 "num_base_bdevs_discovered": 4, 00:12:14.914 "num_base_bdevs_operational": 4, 00:12:14.914 "base_bdevs_list": [ 00:12:14.914 { 00:12:14.915 "name": "NewBaseBdev", 00:12:14.915 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:14.915 "is_configured": true, 00:12:14.915 "data_offset": 0, 00:12:14.915 "data_size": 65536 00:12:14.915 }, 00:12:14.915 { 00:12:14.915 "name": "BaseBdev2", 00:12:14.915 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:14.915 "is_configured": true, 00:12:14.915 "data_offset": 0, 00:12:14.915 "data_size": 65536 00:12:14.915 }, 00:12:14.915 { 00:12:14.915 "name": "BaseBdev3", 00:12:14.915 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:14.915 "is_configured": true, 00:12:14.915 "data_offset": 0, 00:12:14.915 "data_size": 65536 00:12:14.915 }, 00:12:14.915 { 00:12:14.915 "name": "BaseBdev4", 00:12:14.915 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:14.915 "is_configured": true, 00:12:14.915 "data_offset": 0, 00:12:14.915 "data_size": 65536 00:12:14.915 } 00:12:14.915 ] 00:12:14.915 }' 00:12:14.915 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.915 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.482 [2024-11-19 10:06:29.476215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.482 "name": "Existed_Raid", 00:12:15.482 "aliases": [ 00:12:15.482 "0bc087fc-3b03-44f8-8761-104dd2cdab01" 00:12:15.482 ], 00:12:15.482 "product_name": "Raid Volume", 00:12:15.482 "block_size": 512, 00:12:15.482 "num_blocks": 65536, 00:12:15.482 "uuid": "0bc087fc-3b03-44f8-8761-104dd2cdab01", 00:12:15.482 "assigned_rate_limits": { 00:12:15.482 "rw_ios_per_sec": 0, 00:12:15.482 "rw_mbytes_per_sec": 0, 00:12:15.482 "r_mbytes_per_sec": 0, 00:12:15.482 "w_mbytes_per_sec": 0 00:12:15.482 }, 00:12:15.482 "claimed": false, 00:12:15.482 "zoned": false, 00:12:15.482 "supported_io_types": { 00:12:15.482 "read": true, 00:12:15.482 "write": true, 00:12:15.482 "unmap": false, 00:12:15.482 "flush": false, 00:12:15.482 "reset": true, 00:12:15.482 "nvme_admin": false, 00:12:15.482 "nvme_io": false, 00:12:15.482 "nvme_io_md": false, 00:12:15.482 "write_zeroes": true, 00:12:15.482 "zcopy": false, 00:12:15.482 "get_zone_info": false, 00:12:15.482 "zone_management": false, 00:12:15.482 "zone_append": false, 00:12:15.482 "compare": false, 00:12:15.482 "compare_and_write": false, 00:12:15.482 "abort": false, 00:12:15.482 "seek_hole": false, 00:12:15.482 "seek_data": false, 00:12:15.482 
"copy": false, 00:12:15.482 "nvme_iov_md": false 00:12:15.482 }, 00:12:15.482 "memory_domains": [ 00:12:15.482 { 00:12:15.482 "dma_device_id": "system", 00:12:15.482 "dma_device_type": 1 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.482 "dma_device_type": 2 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "system", 00:12:15.482 "dma_device_type": 1 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.482 "dma_device_type": 2 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "system", 00:12:15.482 "dma_device_type": 1 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.482 "dma_device_type": 2 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "system", 00:12:15.482 "dma_device_type": 1 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.482 "dma_device_type": 2 00:12:15.482 } 00:12:15.482 ], 00:12:15.482 "driver_specific": { 00:12:15.482 "raid": { 00:12:15.482 "uuid": "0bc087fc-3b03-44f8-8761-104dd2cdab01", 00:12:15.482 "strip_size_kb": 0, 00:12:15.482 "state": "online", 00:12:15.482 "raid_level": "raid1", 00:12:15.482 "superblock": false, 00:12:15.482 "num_base_bdevs": 4, 00:12:15.482 "num_base_bdevs_discovered": 4, 00:12:15.482 "num_base_bdevs_operational": 4, 00:12:15.482 "base_bdevs_list": [ 00:12:15.482 { 00:12:15.482 "name": "NewBaseBdev", 00:12:15.482 "uuid": "c883728b-e3a5-4a3b-914e-380f6d47f177", 00:12:15.482 "is_configured": true, 00:12:15.482 "data_offset": 0, 00:12:15.482 "data_size": 65536 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "name": "BaseBdev2", 00:12:15.482 "uuid": "3b51ff38-0a07-4a31-be4d-6f9060a47682", 00:12:15.482 "is_configured": true, 00:12:15.482 "data_offset": 0, 00:12:15.482 "data_size": 65536 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "name": "BaseBdev3", 00:12:15.482 "uuid": "94b73f73-2231-4e15-a106-4b9b67cf2ce9", 00:12:15.482 
"is_configured": true, 00:12:15.482 "data_offset": 0, 00:12:15.482 "data_size": 65536 00:12:15.482 }, 00:12:15.482 { 00:12:15.482 "name": "BaseBdev4", 00:12:15.482 "uuid": "8cb27492-dd25-4af1-8e8c-15e717b98b9d", 00:12:15.482 "is_configured": true, 00:12:15.482 "data_offset": 0, 00:12:15.482 "data_size": 65536 00:12:15.482 } 00:12:15.482 ] 00:12:15.482 } 00:12:15.482 } 00:12:15.482 }' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:15.482 BaseBdev2 00:12:15.482 BaseBdev3 00:12:15.482 BaseBdev4' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.482 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.483 10:06:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.483 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.483 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.483 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.483 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.483 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.742 10:06:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.742 [2024-11-19 10:06:29.847816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.742 [2024-11-19 10:06:29.847868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.742 [2024-11-19 10:06:29.847994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.742 [2024-11-19 10:06:29.848453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.742 [2024-11-19 10:06:29.848475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73211 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73211 ']' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73211 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73211 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.742 killing process with pid 73211 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73211' 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73211 00:12:15.742 [2024-11-19 10:06:29.891757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.742 10:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73211 00:12:16.309 [2024-11-19 10:06:30.280360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.268 ************************************ 00:12:17.268 END TEST raid_state_function_test 00:12:17.268 ************************************ 00:12:17.268 10:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:17.268 00:12:17.268 real 0m13.237s 00:12:17.268 user 0m21.672s 00:12:17.268 sys 0m1.947s 00:12:17.268 10:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.268 10:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:17.527 10:06:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:17.527 10:06:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:17.527 10:06:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.527 10:06:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.527 ************************************ 00:12:17.527 START TEST raid_state_function_test_sb 00:12:17.527 ************************************ 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.527 
10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:17.527 Process raid pid: 73899 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73899 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73899' 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73899 00:12:17.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73899 ']' 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.527 10:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.527 [2024-11-19 10:06:31.637368] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:12:17.527 [2024-11-19 10:06:31.637860] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.786 [2024-11-19 10:06:31.836857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.045 [2024-11-19 10:06:32.024157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.304 [2024-11-19 10:06:32.287712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.304 [2024-11-19 10:06:32.287771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 [2024-11-19 10:06:32.665975] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.563 [2024-11-19 10:06:32.666044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.563 [2024-11-19 10:06:32.666064] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.563 [2024-11-19 10:06:32.666083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.563 [2024-11-19 10:06:32.666094] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:18.563 [2024-11-19 10:06:32.666111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.563 [2024-11-19 10:06:32.666121] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:18.563 [2024-11-19 10:06:32.666136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.563 10:06:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.563 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.563 "name": "Existed_Raid", 00:12:18.563 "uuid": "ff33d761-22f5-428d-8b0a-84f8ea3abc8b", 00:12:18.563 "strip_size_kb": 0, 00:12:18.563 "state": "configuring", 00:12:18.563 "raid_level": "raid1", 00:12:18.563 "superblock": true, 00:12:18.563 "num_base_bdevs": 4, 00:12:18.563 "num_base_bdevs_discovered": 0, 00:12:18.563 "num_base_bdevs_operational": 4, 00:12:18.563 "base_bdevs_list": [ 00:12:18.563 { 00:12:18.563 "name": "BaseBdev1", 00:12:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.563 "is_configured": false, 00:12:18.563 "data_offset": 0, 00:12:18.563 "data_size": 0 00:12:18.563 }, 00:12:18.563 { 00:12:18.563 "name": "BaseBdev2", 00:12:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.563 "is_configured": false, 00:12:18.563 "data_offset": 0, 00:12:18.563 "data_size": 0 00:12:18.563 }, 00:12:18.564 { 00:12:18.564 "name": "BaseBdev3", 00:12:18.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.564 "is_configured": false, 00:12:18.564 "data_offset": 0, 00:12:18.564 "data_size": 0 00:12:18.564 }, 00:12:18.564 { 00:12:18.564 "name": "BaseBdev4", 00:12:18.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.564 "is_configured": false, 00:12:18.564 "data_offset": 0, 00:12:18.564 "data_size": 0 00:12:18.564 } 00:12:18.564 ] 00:12:18.564 }' 00:12:18.564 10:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.564 10:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 10:06:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-11-19 10:06:33.206076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.132 [2024-11-19 10:06:33.206144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-11-19 10:06:33.218117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.132 [2024-11-19 10:06:33.218188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.132 [2024-11-19 10:06:33.218206] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.132 [2024-11-19 10:06:33.218223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.132 [2024-11-19 10:06:33.218234] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.132 [2024-11-19 10:06:33.218251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.132 [2024-11-19 10:06:33.218261] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:19.132 [2024-11-19 10:06:33.218276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-11-19 10:06:33.270322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.132 BaseBdev1 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [ 00:12:19.132 { 00:12:19.132 "name": "BaseBdev1", 00:12:19.132 "aliases": [ 00:12:19.132 "84defd90-4426-4ccb-8bb2-91567308da61" 00:12:19.132 ], 00:12:19.132 "product_name": "Malloc disk", 00:12:19.132 "block_size": 512, 00:12:19.132 "num_blocks": 65536, 00:12:19.132 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:19.132 "assigned_rate_limits": { 00:12:19.132 "rw_ios_per_sec": 0, 00:12:19.132 "rw_mbytes_per_sec": 0, 00:12:19.132 "r_mbytes_per_sec": 0, 00:12:19.132 "w_mbytes_per_sec": 0 00:12:19.132 }, 00:12:19.132 "claimed": true, 00:12:19.132 "claim_type": "exclusive_write", 00:12:19.132 "zoned": false, 00:12:19.132 "supported_io_types": { 00:12:19.132 "read": true, 00:12:19.132 "write": true, 00:12:19.132 "unmap": true, 00:12:19.132 "flush": true, 00:12:19.132 "reset": true, 00:12:19.132 "nvme_admin": false, 00:12:19.132 "nvme_io": false, 00:12:19.132 "nvme_io_md": false, 00:12:19.132 "write_zeroes": true, 00:12:19.132 "zcopy": true, 00:12:19.132 "get_zone_info": false, 00:12:19.132 "zone_management": false, 00:12:19.132 "zone_append": false, 00:12:19.132 "compare": false, 00:12:19.132 "compare_and_write": false, 00:12:19.132 "abort": true, 00:12:19.132 "seek_hole": false, 00:12:19.132 "seek_data": false, 00:12:19.132 "copy": true, 00:12:19.132 "nvme_iov_md": false 00:12:19.132 }, 00:12:19.132 "memory_domains": [ 00:12:19.132 { 00:12:19.132 "dma_device_id": "system", 00:12:19.132 "dma_device_type": 1 00:12:19.132 }, 00:12:19.132 { 00:12:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.132 "dma_device_type": 2 00:12:19.132 } 00:12:19.132 
], 00:12:19.132 "driver_specific": {} 00:12:19.132 } 00:12:19.132 ] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.132 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 10:06:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.391 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.391 "name": "Existed_Raid", 00:12:19.391 "uuid": "2436c0a9-d1fc-43f9-875e-c7498dc32eca", 00:12:19.391 "strip_size_kb": 0, 00:12:19.391 "state": "configuring", 00:12:19.391 "raid_level": "raid1", 00:12:19.391 "superblock": true, 00:12:19.391 "num_base_bdevs": 4, 00:12:19.391 "num_base_bdevs_discovered": 1, 00:12:19.391 "num_base_bdevs_operational": 4, 00:12:19.391 "base_bdevs_list": [ 00:12:19.391 { 00:12:19.391 "name": "BaseBdev1", 00:12:19.391 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:19.391 "is_configured": true, 00:12:19.391 "data_offset": 2048, 00:12:19.391 "data_size": 63488 00:12:19.391 }, 00:12:19.391 { 00:12:19.391 "name": "BaseBdev2", 00:12:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.391 "is_configured": false, 00:12:19.391 "data_offset": 0, 00:12:19.391 "data_size": 0 00:12:19.391 }, 00:12:19.391 { 00:12:19.391 "name": "BaseBdev3", 00:12:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.391 "is_configured": false, 00:12:19.391 "data_offset": 0, 00:12:19.391 "data_size": 0 00:12:19.391 }, 00:12:19.391 { 00:12:19.391 "name": "BaseBdev4", 00:12:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.391 "is_configured": false, 00:12:19.391 "data_offset": 0, 00:12:19.391 "data_size": 0 00:12:19.391 } 00:12:19.391 ] 00:12:19.391 }' 00:12:19.391 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.391 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.651 10:06:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.651 [2024-11-19 10:06:33.858519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.651 [2024-11-19 10:06:33.858590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.651 [2024-11-19 10:06:33.866539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.651 [2024-11-19 10:06:33.869517] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.651 [2024-11-19 10:06:33.869586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.651 [2024-11-19 10:06:33.869604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.651 [2024-11-19 10:06:33.869620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.651 [2024-11-19 10:06:33.869630] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:19.651 [2024-11-19 10:06:33.869643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.651 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.910 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.910 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:19.910 "name": "Existed_Raid", 00:12:19.910 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:19.910 "strip_size_kb": 0, 00:12:19.910 "state": "configuring", 00:12:19.910 "raid_level": "raid1", 00:12:19.910 "superblock": true, 00:12:19.910 "num_base_bdevs": 4, 00:12:19.910 "num_base_bdevs_discovered": 1, 00:12:19.910 "num_base_bdevs_operational": 4, 00:12:19.910 "base_bdevs_list": [ 00:12:19.910 { 00:12:19.910 "name": "BaseBdev1", 00:12:19.910 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:19.910 "is_configured": true, 00:12:19.910 "data_offset": 2048, 00:12:19.910 "data_size": 63488 00:12:19.910 }, 00:12:19.910 { 00:12:19.910 "name": "BaseBdev2", 00:12:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.910 "is_configured": false, 00:12:19.910 "data_offset": 0, 00:12:19.910 "data_size": 0 00:12:19.910 }, 00:12:19.910 { 00:12:19.910 "name": "BaseBdev3", 00:12:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.910 "is_configured": false, 00:12:19.910 "data_offset": 0, 00:12:19.910 "data_size": 0 00:12:19.910 }, 00:12:19.910 { 00:12:19.910 "name": "BaseBdev4", 00:12:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.910 "is_configured": false, 00:12:19.910 "data_offset": 0, 00:12:19.910 "data_size": 0 00:12:19.910 } 00:12:19.910 ] 00:12:19.910 }' 00:12:19.910 10:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.910 10:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.169 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.169 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.169 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.429 [2024-11-19 10:06:34.437668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:20.429 BaseBdev2 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.429 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.429 [ 00:12:20.429 { 00:12:20.429 "name": "BaseBdev2", 00:12:20.429 "aliases": [ 00:12:20.429 "06204912-0bde-42f1-b064-3edfdc6d6f54" 00:12:20.429 ], 00:12:20.429 "product_name": "Malloc disk", 00:12:20.429 "block_size": 512, 00:12:20.429 "num_blocks": 65536, 00:12:20.429 "uuid": "06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:20.429 
"assigned_rate_limits": { 00:12:20.429 "rw_ios_per_sec": 0, 00:12:20.429 "rw_mbytes_per_sec": 0, 00:12:20.429 "r_mbytes_per_sec": 0, 00:12:20.429 "w_mbytes_per_sec": 0 00:12:20.429 }, 00:12:20.429 "claimed": true, 00:12:20.429 "claim_type": "exclusive_write", 00:12:20.429 "zoned": false, 00:12:20.429 "supported_io_types": { 00:12:20.429 "read": true, 00:12:20.429 "write": true, 00:12:20.429 "unmap": true, 00:12:20.429 "flush": true, 00:12:20.429 "reset": true, 00:12:20.429 "nvme_admin": false, 00:12:20.429 "nvme_io": false, 00:12:20.429 "nvme_io_md": false, 00:12:20.429 "write_zeroes": true, 00:12:20.429 "zcopy": true, 00:12:20.429 "get_zone_info": false, 00:12:20.429 "zone_management": false, 00:12:20.429 "zone_append": false, 00:12:20.429 "compare": false, 00:12:20.429 "compare_and_write": false, 00:12:20.429 "abort": true, 00:12:20.429 "seek_hole": false, 00:12:20.430 "seek_data": false, 00:12:20.430 "copy": true, 00:12:20.430 "nvme_iov_md": false 00:12:20.430 }, 00:12:20.430 "memory_domains": [ 00:12:20.430 { 00:12:20.430 "dma_device_id": "system", 00:12:20.430 "dma_device_type": 1 00:12:20.430 }, 00:12:20.430 { 00:12:20.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.430 "dma_device_type": 2 00:12:20.430 } 00:12:20.430 ], 00:12:20.430 "driver_specific": {} 00:12:20.430 } 00:12:20.430 ] 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.430 "name": "Existed_Raid", 00:12:20.430 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:20.430 "strip_size_kb": 0, 00:12:20.430 "state": "configuring", 00:12:20.430 "raid_level": "raid1", 00:12:20.430 "superblock": true, 00:12:20.430 "num_base_bdevs": 4, 00:12:20.430 "num_base_bdevs_discovered": 2, 00:12:20.430 "num_base_bdevs_operational": 4, 
00:12:20.430 "base_bdevs_list": [ 00:12:20.430 { 00:12:20.430 "name": "BaseBdev1", 00:12:20.430 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:20.430 "is_configured": true, 00:12:20.430 "data_offset": 2048, 00:12:20.430 "data_size": 63488 00:12:20.430 }, 00:12:20.430 { 00:12:20.430 "name": "BaseBdev2", 00:12:20.430 "uuid": "06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:20.430 "is_configured": true, 00:12:20.430 "data_offset": 2048, 00:12:20.430 "data_size": 63488 00:12:20.430 }, 00:12:20.430 { 00:12:20.430 "name": "BaseBdev3", 00:12:20.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.430 "is_configured": false, 00:12:20.430 "data_offset": 0, 00:12:20.430 "data_size": 0 00:12:20.430 }, 00:12:20.430 { 00:12:20.430 "name": "BaseBdev4", 00:12:20.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.430 "is_configured": false, 00:12:20.430 "data_offset": 0, 00:12:20.430 "data_size": 0 00:12:20.430 } 00:12:20.430 ] 00:12:20.430 }' 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.430 10:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 10:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 [2024-11-19 10:06:35.054294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.998 BaseBdev3 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 [ 00:12:20.998 { 00:12:20.998 "name": "BaseBdev3", 00:12:20.998 "aliases": [ 00:12:20.998 "1315941e-b961-46e2-ad33-f9382981f54f" 00:12:20.998 ], 00:12:20.998 "product_name": "Malloc disk", 00:12:20.998 "block_size": 512, 00:12:20.998 "num_blocks": 65536, 00:12:20.998 "uuid": "1315941e-b961-46e2-ad33-f9382981f54f", 00:12:20.998 "assigned_rate_limits": { 00:12:20.998 "rw_ios_per_sec": 0, 00:12:20.998 "rw_mbytes_per_sec": 0, 00:12:20.998 "r_mbytes_per_sec": 0, 00:12:20.998 "w_mbytes_per_sec": 0 00:12:20.998 }, 00:12:20.998 "claimed": true, 00:12:20.998 "claim_type": "exclusive_write", 00:12:20.998 "zoned": false, 00:12:20.998 "supported_io_types": { 00:12:20.998 "read": true, 00:12:20.998 
"write": true, 00:12:20.998 "unmap": true, 00:12:20.998 "flush": true, 00:12:20.998 "reset": true, 00:12:20.998 "nvme_admin": false, 00:12:20.998 "nvme_io": false, 00:12:20.998 "nvme_io_md": false, 00:12:20.998 "write_zeroes": true, 00:12:20.998 "zcopy": true, 00:12:20.998 "get_zone_info": false, 00:12:20.998 "zone_management": false, 00:12:20.998 "zone_append": false, 00:12:20.998 "compare": false, 00:12:20.998 "compare_and_write": false, 00:12:20.998 "abort": true, 00:12:20.998 "seek_hole": false, 00:12:20.998 "seek_data": false, 00:12:20.998 "copy": true, 00:12:20.998 "nvme_iov_md": false 00:12:20.998 }, 00:12:20.998 "memory_domains": [ 00:12:20.998 { 00:12:20.998 "dma_device_id": "system", 00:12:20.998 "dma_device_type": 1 00:12:20.998 }, 00:12:20.998 { 00:12:20.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.998 "dma_device_type": 2 00:12:20.998 } 00:12:20.998 ], 00:12:20.998 "driver_specific": {} 00:12:20.998 } 00:12:20.998 ] 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.998 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.999 "name": "Existed_Raid", 00:12:20.999 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:20.999 "strip_size_kb": 0, 00:12:20.999 "state": "configuring", 00:12:20.999 "raid_level": "raid1", 00:12:20.999 "superblock": true, 00:12:20.999 "num_base_bdevs": 4, 00:12:20.999 "num_base_bdevs_discovered": 3, 00:12:20.999 "num_base_bdevs_operational": 4, 00:12:20.999 "base_bdevs_list": [ 00:12:20.999 { 00:12:20.999 "name": "BaseBdev1", 00:12:20.999 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:20.999 "is_configured": true, 00:12:20.999 "data_offset": 2048, 00:12:20.999 "data_size": 63488 00:12:20.999 }, 00:12:20.999 { 00:12:20.999 "name": "BaseBdev2", 00:12:20.999 "uuid": 
"06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:20.999 "is_configured": true, 00:12:20.999 "data_offset": 2048, 00:12:20.999 "data_size": 63488 00:12:20.999 }, 00:12:20.999 { 00:12:20.999 "name": "BaseBdev3", 00:12:20.999 "uuid": "1315941e-b961-46e2-ad33-f9382981f54f", 00:12:20.999 "is_configured": true, 00:12:20.999 "data_offset": 2048, 00:12:20.999 "data_size": 63488 00:12:20.999 }, 00:12:20.999 { 00:12:20.999 "name": "BaseBdev4", 00:12:20.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.999 "is_configured": false, 00:12:20.999 "data_offset": 0, 00:12:20.999 "data_size": 0 00:12:20.999 } 00:12:20.999 ] 00:12:20.999 }' 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.999 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.566 [2024-11-19 10:06:35.693975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:21.566 [2024-11-19 10:06:35.694345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:21.566 [2024-11-19 10:06:35.694368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.566 BaseBdev4 00:12:21.566 [2024-11-19 10:06:35.694731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.566 [2024-11-19 10:06:35.694965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:21.566 [2024-11-19 10:06:35.694988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:21.566 [2024-11-19 10:06:35.695188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.566 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.567 [ 00:12:21.567 { 00:12:21.567 "name": "BaseBdev4", 00:12:21.567 "aliases": [ 00:12:21.567 "acb4e3ca-e954-405c-b108-4cf011afcc3c" 00:12:21.567 ], 00:12:21.567 "product_name": "Malloc disk", 00:12:21.567 "block_size": 512, 00:12:21.567 
"num_blocks": 65536, 00:12:21.567 "uuid": "acb4e3ca-e954-405c-b108-4cf011afcc3c", 00:12:21.567 "assigned_rate_limits": { 00:12:21.567 "rw_ios_per_sec": 0, 00:12:21.567 "rw_mbytes_per_sec": 0, 00:12:21.567 "r_mbytes_per_sec": 0, 00:12:21.567 "w_mbytes_per_sec": 0 00:12:21.567 }, 00:12:21.567 "claimed": true, 00:12:21.567 "claim_type": "exclusive_write", 00:12:21.567 "zoned": false, 00:12:21.567 "supported_io_types": { 00:12:21.567 "read": true, 00:12:21.567 "write": true, 00:12:21.567 "unmap": true, 00:12:21.567 "flush": true, 00:12:21.567 "reset": true, 00:12:21.567 "nvme_admin": false, 00:12:21.567 "nvme_io": false, 00:12:21.567 "nvme_io_md": false, 00:12:21.567 "write_zeroes": true, 00:12:21.567 "zcopy": true, 00:12:21.567 "get_zone_info": false, 00:12:21.567 "zone_management": false, 00:12:21.567 "zone_append": false, 00:12:21.567 "compare": false, 00:12:21.567 "compare_and_write": false, 00:12:21.567 "abort": true, 00:12:21.567 "seek_hole": false, 00:12:21.567 "seek_data": false, 00:12:21.567 "copy": true, 00:12:21.567 "nvme_iov_md": false 00:12:21.567 }, 00:12:21.567 "memory_domains": [ 00:12:21.567 { 00:12:21.567 "dma_device_id": "system", 00:12:21.567 "dma_device_type": 1 00:12:21.567 }, 00:12:21.567 { 00:12:21.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.567 "dma_device_type": 2 00:12:21.567 } 00:12:21.567 ], 00:12:21.567 "driver_specific": {} 00:12:21.567 } 00:12:21.567 ] 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.567 "name": "Existed_Raid", 00:12:21.567 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:21.567 "strip_size_kb": 0, 00:12:21.567 "state": "online", 00:12:21.567 "raid_level": "raid1", 00:12:21.567 "superblock": true, 00:12:21.567 "num_base_bdevs": 4, 
00:12:21.567 "num_base_bdevs_discovered": 4, 00:12:21.567 "num_base_bdevs_operational": 4, 00:12:21.567 "base_bdevs_list": [ 00:12:21.567 { 00:12:21.567 "name": "BaseBdev1", 00:12:21.567 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:21.567 "is_configured": true, 00:12:21.567 "data_offset": 2048, 00:12:21.567 "data_size": 63488 00:12:21.567 }, 00:12:21.567 { 00:12:21.567 "name": "BaseBdev2", 00:12:21.567 "uuid": "06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:21.567 "is_configured": true, 00:12:21.567 "data_offset": 2048, 00:12:21.567 "data_size": 63488 00:12:21.567 }, 00:12:21.567 { 00:12:21.567 "name": "BaseBdev3", 00:12:21.567 "uuid": "1315941e-b961-46e2-ad33-f9382981f54f", 00:12:21.567 "is_configured": true, 00:12:21.567 "data_offset": 2048, 00:12:21.567 "data_size": 63488 00:12:21.567 }, 00:12:21.567 { 00:12:21.567 "name": "BaseBdev4", 00:12:21.567 "uuid": "acb4e3ca-e954-405c-b108-4cf011afcc3c", 00:12:21.567 "is_configured": true, 00:12:21.567 "data_offset": 2048, 00:12:21.567 "data_size": 63488 00:12:21.567 } 00:12:21.567 ] 00:12:21.567 }' 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.567 10:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.135 
10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.135 [2024-11-19 10:06:36.262686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.135 "name": "Existed_Raid", 00:12:22.135 "aliases": [ 00:12:22.135 "f9b6f102-7013-49e3-aa80-f07f3f548bc8" 00:12:22.135 ], 00:12:22.135 "product_name": "Raid Volume", 00:12:22.135 "block_size": 512, 00:12:22.135 "num_blocks": 63488, 00:12:22.135 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:22.135 "assigned_rate_limits": { 00:12:22.135 "rw_ios_per_sec": 0, 00:12:22.135 "rw_mbytes_per_sec": 0, 00:12:22.135 "r_mbytes_per_sec": 0, 00:12:22.135 "w_mbytes_per_sec": 0 00:12:22.135 }, 00:12:22.135 "claimed": false, 00:12:22.135 "zoned": false, 00:12:22.135 "supported_io_types": { 00:12:22.135 "read": true, 00:12:22.135 "write": true, 00:12:22.135 "unmap": false, 00:12:22.135 "flush": false, 00:12:22.135 "reset": true, 00:12:22.135 "nvme_admin": false, 00:12:22.135 "nvme_io": false, 00:12:22.135 "nvme_io_md": false, 00:12:22.135 "write_zeroes": true, 00:12:22.135 "zcopy": false, 00:12:22.135 "get_zone_info": false, 00:12:22.135 "zone_management": false, 00:12:22.135 "zone_append": false, 00:12:22.135 "compare": false, 00:12:22.135 "compare_and_write": false, 00:12:22.135 "abort": false, 00:12:22.135 "seek_hole": false, 00:12:22.135 "seek_data": false, 00:12:22.135 "copy": false, 00:12:22.135 
"nvme_iov_md": false 00:12:22.135 }, 00:12:22.135 "memory_domains": [ 00:12:22.135 { 00:12:22.135 "dma_device_id": "system", 00:12:22.135 "dma_device_type": 1 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.135 "dma_device_type": 2 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "system", 00:12:22.135 "dma_device_type": 1 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.135 "dma_device_type": 2 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "system", 00:12:22.135 "dma_device_type": 1 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.135 "dma_device_type": 2 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "system", 00:12:22.135 "dma_device_type": 1 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.135 "dma_device_type": 2 00:12:22.135 } 00:12:22.135 ], 00:12:22.135 "driver_specific": { 00:12:22.135 "raid": { 00:12:22.135 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:22.135 "strip_size_kb": 0, 00:12:22.135 "state": "online", 00:12:22.135 "raid_level": "raid1", 00:12:22.135 "superblock": true, 00:12:22.135 "num_base_bdevs": 4, 00:12:22.135 "num_base_bdevs_discovered": 4, 00:12:22.135 "num_base_bdevs_operational": 4, 00:12:22.135 "base_bdevs_list": [ 00:12:22.135 { 00:12:22.135 "name": "BaseBdev1", 00:12:22.135 "uuid": "84defd90-4426-4ccb-8bb2-91567308da61", 00:12:22.135 "is_configured": true, 00:12:22.135 "data_offset": 2048, 00:12:22.135 "data_size": 63488 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "name": "BaseBdev2", 00:12:22.135 "uuid": "06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:22.135 "is_configured": true, 00:12:22.135 "data_offset": 2048, 00:12:22.135 "data_size": 63488 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "name": "BaseBdev3", 00:12:22.135 "uuid": "1315941e-b961-46e2-ad33-f9382981f54f", 00:12:22.135 "is_configured": true, 
00:12:22.135 "data_offset": 2048, 00:12:22.135 "data_size": 63488 00:12:22.135 }, 00:12:22.135 { 00:12:22.135 "name": "BaseBdev4", 00:12:22.135 "uuid": "acb4e3ca-e954-405c-b108-4cf011afcc3c", 00:12:22.135 "is_configured": true, 00:12:22.135 "data_offset": 2048, 00:12:22.135 "data_size": 63488 00:12:22.135 } 00:12:22.135 ] 00:12:22.135 } 00:12:22.135 } 00:12:22.135 }' 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:22.135 BaseBdev2 00:12:22.135 BaseBdev3 00:12:22.135 BaseBdev4' 00:12:22.135 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.395 10:06:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.395 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.654 [2024-11-19 10:06:36.646438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:22.654 10:06:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.654 "name": "Existed_Raid", 00:12:22.654 "uuid": "f9b6f102-7013-49e3-aa80-f07f3f548bc8", 00:12:22.654 "strip_size_kb": 0, 00:12:22.654 
"state": "online", 00:12:22.654 "raid_level": "raid1", 00:12:22.654 "superblock": true, 00:12:22.654 "num_base_bdevs": 4, 00:12:22.654 "num_base_bdevs_discovered": 3, 00:12:22.654 "num_base_bdevs_operational": 3, 00:12:22.654 "base_bdevs_list": [ 00:12:22.654 { 00:12:22.654 "name": null, 00:12:22.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.654 "is_configured": false, 00:12:22.654 "data_offset": 0, 00:12:22.654 "data_size": 63488 00:12:22.654 }, 00:12:22.654 { 00:12:22.654 "name": "BaseBdev2", 00:12:22.654 "uuid": "06204912-0bde-42f1-b064-3edfdc6d6f54", 00:12:22.654 "is_configured": true, 00:12:22.654 "data_offset": 2048, 00:12:22.654 "data_size": 63488 00:12:22.654 }, 00:12:22.654 { 00:12:22.654 "name": "BaseBdev3", 00:12:22.654 "uuid": "1315941e-b961-46e2-ad33-f9382981f54f", 00:12:22.654 "is_configured": true, 00:12:22.654 "data_offset": 2048, 00:12:22.654 "data_size": 63488 00:12:22.654 }, 00:12:22.654 { 00:12:22.654 "name": "BaseBdev4", 00:12:22.654 "uuid": "acb4e3ca-e954-405c-b108-4cf011afcc3c", 00:12:22.654 "is_configured": true, 00:12:22.654 "data_offset": 2048, 00:12:22.654 "data_size": 63488 00:12:22.654 } 00:12:22.654 ] 00:12:22.654 }' 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.654 10:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.221 10:06:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.221 [2024-11-19 10:06:37.316750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.221 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.480 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.480 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:23.480 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:23.480 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 [2024-11-19 10:06:37.474854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.481 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 [2024-11-19 10:06:37.628853] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:23.481 [2024-11-19 10:06:37.629032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.740 [2024-11-19 10:06:37.723317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.740 [2024-11-19 10:06:37.723664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.740 [2024-11-19 10:06:37.723908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 BaseBdev2 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:23.740 [ 00:12:23.740 { 00:12:23.740 "name": "BaseBdev2", 00:12:23.740 "aliases": [ 00:12:23.740 "b4e059f9-fddf-446d-a665-5a2a344e1386" 00:12:23.740 ], 00:12:23.740 "product_name": "Malloc disk", 00:12:23.740 "block_size": 512, 00:12:23.740 "num_blocks": 65536, 00:12:23.740 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:23.740 "assigned_rate_limits": { 00:12:23.740 "rw_ios_per_sec": 0, 00:12:23.740 "rw_mbytes_per_sec": 0, 00:12:23.740 "r_mbytes_per_sec": 0, 00:12:23.740 "w_mbytes_per_sec": 0 00:12:23.740 }, 00:12:23.740 "claimed": false, 00:12:23.740 "zoned": false, 00:12:23.740 "supported_io_types": { 00:12:23.740 "read": true, 00:12:23.740 "write": true, 00:12:23.740 "unmap": true, 00:12:23.740 "flush": true, 00:12:23.740 "reset": true, 00:12:23.740 "nvme_admin": false, 00:12:23.740 "nvme_io": false, 00:12:23.740 "nvme_io_md": false, 00:12:23.740 "write_zeroes": true, 00:12:23.740 "zcopy": true, 00:12:23.740 "get_zone_info": false, 00:12:23.740 "zone_management": false, 00:12:23.740 "zone_append": false, 00:12:23.740 "compare": false, 00:12:23.740 "compare_and_write": false, 00:12:23.740 "abort": true, 00:12:23.740 "seek_hole": false, 00:12:23.740 "seek_data": false, 00:12:23.740 "copy": true, 00:12:23.740 "nvme_iov_md": false 00:12:23.740 }, 00:12:23.740 "memory_domains": [ 00:12:23.740 { 00:12:23.740 "dma_device_id": "system", 00:12:23.740 "dma_device_type": 1 00:12:23.740 }, 00:12:23.740 { 00:12:23.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.740 "dma_device_type": 2 00:12:23.740 } 00:12:23.740 ], 00:12:23.740 "driver_specific": {} 00:12:23.740 } 00:12:23.740 ] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:23.740 10:06:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 BaseBdev3 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.740 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.741 10:06:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.741 [ 00:12:23.741 { 00:12:23.741 "name": "BaseBdev3", 00:12:23.741 "aliases": [ 00:12:23.741 "3ff1eba1-e362-413d-8d51-612268a56543" 00:12:23.741 ], 00:12:23.741 "product_name": "Malloc disk", 00:12:23.741 "block_size": 512, 00:12:23.741 "num_blocks": 65536, 00:12:23.741 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:23.741 "assigned_rate_limits": { 00:12:23.741 "rw_ios_per_sec": 0, 00:12:23.741 "rw_mbytes_per_sec": 0, 00:12:23.741 "r_mbytes_per_sec": 0, 00:12:23.741 "w_mbytes_per_sec": 0 00:12:23.741 }, 00:12:23.741 "claimed": false, 00:12:23.741 "zoned": false, 00:12:23.741 "supported_io_types": { 00:12:23.741 "read": true, 00:12:23.741 "write": true, 00:12:23.741 "unmap": true, 00:12:23.741 "flush": true, 00:12:23.741 "reset": true, 00:12:23.741 "nvme_admin": false, 00:12:23.741 "nvme_io": false, 00:12:23.741 "nvme_io_md": false, 00:12:23.741 "write_zeroes": true, 00:12:23.741 "zcopy": true, 00:12:23.741 "get_zone_info": false, 00:12:23.741 "zone_management": false, 00:12:23.741 "zone_append": false, 00:12:23.741 "compare": false, 00:12:23.741 "compare_and_write": false, 00:12:23.741 "abort": true, 00:12:23.741 "seek_hole": false, 00:12:23.741 "seek_data": false, 00:12:23.741 "copy": true, 00:12:23.741 "nvme_iov_md": false 00:12:23.741 }, 00:12:23.741 "memory_domains": [ 00:12:23.741 { 00:12:23.741 "dma_device_id": "system", 00:12:23.741 "dma_device_type": 1 00:12:23.741 }, 00:12:23.741 { 00:12:23.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.741 "dma_device_type": 2 00:12:23.741 } 00:12:23.741 ], 00:12:23.741 "driver_specific": {} 00:12:23.741 } 00:12:23.741 ] 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.741 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 BaseBdev4 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 10:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 [ 00:12:24.001 { 00:12:24.001 "name": "BaseBdev4", 00:12:24.001 "aliases": [ 00:12:24.001 "20cdbd0b-7487-4194-8dec-f459bc92f31e" 00:12:24.001 ], 00:12:24.001 "product_name": "Malloc disk", 00:12:24.001 "block_size": 512, 00:12:24.001 "num_blocks": 65536, 00:12:24.001 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:24.001 "assigned_rate_limits": { 00:12:24.001 "rw_ios_per_sec": 0, 00:12:24.001 "rw_mbytes_per_sec": 0, 00:12:24.001 "r_mbytes_per_sec": 0, 00:12:24.001 "w_mbytes_per_sec": 0 00:12:24.001 }, 00:12:24.001 "claimed": false, 00:12:24.001 "zoned": false, 00:12:24.001 "supported_io_types": { 00:12:24.001 "read": true, 00:12:24.001 "write": true, 00:12:24.001 "unmap": true, 00:12:24.001 "flush": true, 00:12:24.001 "reset": true, 00:12:24.001 "nvme_admin": false, 00:12:24.001 "nvme_io": false, 00:12:24.001 "nvme_io_md": false, 00:12:24.001 "write_zeroes": true, 00:12:24.001 "zcopy": true, 00:12:24.001 "get_zone_info": false, 00:12:24.001 "zone_management": false, 00:12:24.001 "zone_append": false, 00:12:24.001 "compare": false, 00:12:24.001 "compare_and_write": false, 00:12:24.001 "abort": true, 00:12:24.001 "seek_hole": false, 00:12:24.001 "seek_data": false, 00:12:24.001 "copy": true, 00:12:24.001 "nvme_iov_md": false 00:12:24.001 }, 00:12:24.001 "memory_domains": [ 00:12:24.001 { 00:12:24.001 "dma_device_id": "system", 00:12:24.001 "dma_device_type": 1 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.001 "dma_device_type": 2 00:12:24.001 } 00:12:24.001 ], 00:12:24.001 "driver_specific": {} 00:12:24.001 } 00:12:24.001 ] 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 [2024-11-19 10:06:38.033011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.001 [2024-11-19 10:06:38.033287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.001 [2024-11-19 10:06:38.033342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.001 [2024-11-19 10:06:38.036194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.001 [2024-11-19 10:06:38.036272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.001 "name": "Existed_Raid", 00:12:24.001 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:24.001 "strip_size_kb": 0, 00:12:24.001 "state": "configuring", 00:12:24.001 "raid_level": "raid1", 00:12:24.001 "superblock": true, 00:12:24.001 "num_base_bdevs": 4, 00:12:24.001 "num_base_bdevs_discovered": 3, 00:12:24.001 "num_base_bdevs_operational": 4, 00:12:24.001 "base_bdevs_list": [ 00:12:24.001 { 00:12:24.001 "name": "BaseBdev1", 00:12:24.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.001 "is_configured": false, 00:12:24.001 "data_offset": 0, 00:12:24.001 "data_size": 0 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "name": "BaseBdev2", 00:12:24.001 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 
00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "name": "BaseBdev3", 00:12:24.001 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "name": "BaseBdev4", 00:12:24.001 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 } 00:12:24.001 ] 00:12:24.001 }' 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.001 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.576 [2024-11-19 10:06:38.533214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.576 "name": "Existed_Raid", 00:12:24.576 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:24.576 "strip_size_kb": 0, 00:12:24.576 "state": "configuring", 00:12:24.576 "raid_level": "raid1", 00:12:24.576 "superblock": true, 00:12:24.576 "num_base_bdevs": 4, 00:12:24.576 "num_base_bdevs_discovered": 2, 00:12:24.576 "num_base_bdevs_operational": 4, 00:12:24.576 "base_bdevs_list": [ 00:12:24.576 { 00:12:24.576 "name": "BaseBdev1", 00:12:24.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.576 "is_configured": false, 00:12:24.576 "data_offset": 0, 00:12:24.576 "data_size": 0 00:12:24.576 }, 00:12:24.576 { 00:12:24.576 "name": null, 00:12:24.576 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:24.576 
"is_configured": false, 00:12:24.576 "data_offset": 0, 00:12:24.576 "data_size": 63488 00:12:24.576 }, 00:12:24.576 { 00:12:24.576 "name": "BaseBdev3", 00:12:24.576 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:24.576 "is_configured": true, 00:12:24.576 "data_offset": 2048, 00:12:24.576 "data_size": 63488 00:12:24.576 }, 00:12:24.576 { 00:12:24.576 "name": "BaseBdev4", 00:12:24.576 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:24.576 "is_configured": true, 00:12:24.576 "data_offset": 2048, 00:12:24.576 "data_size": 63488 00:12:24.576 } 00:12:24.576 ] 00:12:24.576 }' 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.576 10:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.841 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.841 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.841 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.841 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.841 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.099 [2024-11-19 10:06:39.126596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.099 BaseBdev1 
00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.099 [ 00:12:25.099 { 00:12:25.099 "name": "BaseBdev1", 00:12:25.099 "aliases": [ 00:12:25.099 "f5957d26-1a5f-46d1-bd05-17c970b43570" 00:12:25.099 ], 00:12:25.099 "product_name": "Malloc disk", 00:12:25.099 "block_size": 512, 00:12:25.099 "num_blocks": 65536, 00:12:25.099 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:25.099 "assigned_rate_limits": { 00:12:25.099 
"rw_ios_per_sec": 0, 00:12:25.099 "rw_mbytes_per_sec": 0, 00:12:25.099 "r_mbytes_per_sec": 0, 00:12:25.099 "w_mbytes_per_sec": 0 00:12:25.099 }, 00:12:25.099 "claimed": true, 00:12:25.099 "claim_type": "exclusive_write", 00:12:25.099 "zoned": false, 00:12:25.099 "supported_io_types": { 00:12:25.099 "read": true, 00:12:25.099 "write": true, 00:12:25.099 "unmap": true, 00:12:25.099 "flush": true, 00:12:25.099 "reset": true, 00:12:25.099 "nvme_admin": false, 00:12:25.099 "nvme_io": false, 00:12:25.099 "nvme_io_md": false, 00:12:25.099 "write_zeroes": true, 00:12:25.099 "zcopy": true, 00:12:25.099 "get_zone_info": false, 00:12:25.099 "zone_management": false, 00:12:25.099 "zone_append": false, 00:12:25.099 "compare": false, 00:12:25.099 "compare_and_write": false, 00:12:25.099 "abort": true, 00:12:25.099 "seek_hole": false, 00:12:25.099 "seek_data": false, 00:12:25.099 "copy": true, 00:12:25.099 "nvme_iov_md": false 00:12:25.099 }, 00:12:25.099 "memory_domains": [ 00:12:25.099 { 00:12:25.099 "dma_device_id": "system", 00:12:25.099 "dma_device_type": 1 00:12:25.099 }, 00:12:25.099 { 00:12:25.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.099 "dma_device_type": 2 00:12:25.099 } 00:12:25.099 ], 00:12:25.099 "driver_specific": {} 00:12:25.099 } 00:12:25.099 ] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.099 "name": "Existed_Raid", 00:12:25.099 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:25.099 "strip_size_kb": 0, 00:12:25.099 "state": "configuring", 00:12:25.099 "raid_level": "raid1", 00:12:25.099 "superblock": true, 00:12:25.099 "num_base_bdevs": 4, 00:12:25.099 "num_base_bdevs_discovered": 3, 00:12:25.099 "num_base_bdevs_operational": 4, 00:12:25.099 "base_bdevs_list": [ 00:12:25.099 { 00:12:25.099 "name": "BaseBdev1", 00:12:25.099 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:25.099 "is_configured": true, 00:12:25.099 "data_offset": 2048, 00:12:25.099 "data_size": 63488 
00:12:25.099 }, 00:12:25.099 { 00:12:25.099 "name": null, 00:12:25.099 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:25.099 "is_configured": false, 00:12:25.099 "data_offset": 0, 00:12:25.099 "data_size": 63488 00:12:25.099 }, 00:12:25.099 { 00:12:25.099 "name": "BaseBdev3", 00:12:25.099 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:25.099 "is_configured": true, 00:12:25.099 "data_offset": 2048, 00:12:25.099 "data_size": 63488 00:12:25.099 }, 00:12:25.099 { 00:12:25.099 "name": "BaseBdev4", 00:12:25.099 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:25.099 "is_configured": true, 00:12:25.099 "data_offset": 2048, 00:12:25.099 "data_size": 63488 00:12:25.099 } 00:12:25.099 ] 00:12:25.099 }' 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.099 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.666 
[2024-11-19 10:06:39.714880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.666 10:06:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.666 "name": "Existed_Raid", 00:12:25.666 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:25.666 "strip_size_kb": 0, 00:12:25.666 "state": "configuring", 00:12:25.666 "raid_level": "raid1", 00:12:25.666 "superblock": true, 00:12:25.666 "num_base_bdevs": 4, 00:12:25.666 "num_base_bdevs_discovered": 2, 00:12:25.666 "num_base_bdevs_operational": 4, 00:12:25.666 "base_bdevs_list": [ 00:12:25.666 { 00:12:25.666 "name": "BaseBdev1", 00:12:25.666 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:25.666 "is_configured": true, 00:12:25.666 "data_offset": 2048, 00:12:25.666 "data_size": 63488 00:12:25.666 }, 00:12:25.666 { 00:12:25.666 "name": null, 00:12:25.666 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:25.666 "is_configured": false, 00:12:25.666 "data_offset": 0, 00:12:25.666 "data_size": 63488 00:12:25.666 }, 00:12:25.666 { 00:12:25.666 "name": null, 00:12:25.666 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:25.666 "is_configured": false, 00:12:25.666 "data_offset": 0, 00:12:25.666 "data_size": 63488 00:12:25.666 }, 00:12:25.666 { 00:12:25.666 "name": "BaseBdev4", 00:12:25.666 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:25.666 "is_configured": true, 00:12:25.666 "data_offset": 2048, 00:12:25.666 "data_size": 63488 00:12:25.666 } 00:12:25.666 ] 00:12:25.666 }' 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.666 10:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.232 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.232 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.232 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.232 
10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.232 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.232 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.233 [2024-11-19 10:06:40.270988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.233 "name": "Existed_Raid", 00:12:26.233 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:26.233 "strip_size_kb": 0, 00:12:26.233 "state": "configuring", 00:12:26.233 "raid_level": "raid1", 00:12:26.233 "superblock": true, 00:12:26.233 "num_base_bdevs": 4, 00:12:26.233 "num_base_bdevs_discovered": 3, 00:12:26.233 "num_base_bdevs_operational": 4, 00:12:26.233 "base_bdevs_list": [ 00:12:26.233 { 00:12:26.233 "name": "BaseBdev1", 00:12:26.233 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:26.233 "is_configured": true, 00:12:26.233 "data_offset": 2048, 00:12:26.233 "data_size": 63488 00:12:26.233 }, 00:12:26.233 { 00:12:26.233 "name": null, 00:12:26.233 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:26.233 "is_configured": false, 00:12:26.233 "data_offset": 0, 00:12:26.233 "data_size": 63488 00:12:26.233 }, 00:12:26.233 { 00:12:26.233 "name": "BaseBdev3", 00:12:26.233 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:26.233 "is_configured": true, 00:12:26.233 "data_offset": 2048, 00:12:26.233 "data_size": 63488 00:12:26.233 }, 00:12:26.233 { 00:12:26.233 "name": "BaseBdev4", 00:12:26.233 "uuid": 
"20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:26.233 "is_configured": true, 00:12:26.233 "data_offset": 2048, 00:12:26.233 "data_size": 63488 00:12:26.233 } 00:12:26.233 ] 00:12:26.233 }' 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.233 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.800 [2024-11-19 10:06:40.827203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.800 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.800 "name": "Existed_Raid", 00:12:26.800 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:26.800 "strip_size_kb": 0, 00:12:26.800 "state": "configuring", 00:12:26.800 "raid_level": "raid1", 00:12:26.800 "superblock": true, 00:12:26.800 "num_base_bdevs": 4, 00:12:26.800 "num_base_bdevs_discovered": 2, 00:12:26.800 "num_base_bdevs_operational": 4, 00:12:26.800 "base_bdevs_list": [ 00:12:26.800 { 00:12:26.800 "name": null, 00:12:26.800 
"uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:26.800 "is_configured": false, 00:12:26.800 "data_offset": 0, 00:12:26.800 "data_size": 63488 00:12:26.800 }, 00:12:26.800 { 00:12:26.800 "name": null, 00:12:26.800 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:26.800 "is_configured": false, 00:12:26.800 "data_offset": 0, 00:12:26.800 "data_size": 63488 00:12:26.800 }, 00:12:26.800 { 00:12:26.800 "name": "BaseBdev3", 00:12:26.800 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:26.800 "is_configured": true, 00:12:26.800 "data_offset": 2048, 00:12:26.800 "data_size": 63488 00:12:26.800 }, 00:12:26.801 { 00:12:26.801 "name": "BaseBdev4", 00:12:26.801 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:26.801 "is_configured": true, 00:12:26.801 "data_offset": 2048, 00:12:26.801 "data_size": 63488 00:12:26.801 } 00:12:26.801 ] 00:12:26.801 }' 00:12:26.801 10:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.801 10:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.368 [2024-11-19 10:06:41.503988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.368 10:06:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.368 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.368 "name": "Existed_Raid", 00:12:27.368 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:27.368 "strip_size_kb": 0, 00:12:27.368 "state": "configuring", 00:12:27.368 "raid_level": "raid1", 00:12:27.368 "superblock": true, 00:12:27.368 "num_base_bdevs": 4, 00:12:27.368 "num_base_bdevs_discovered": 3, 00:12:27.368 "num_base_bdevs_operational": 4, 00:12:27.368 "base_bdevs_list": [ 00:12:27.368 { 00:12:27.368 "name": null, 00:12:27.368 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:27.368 "is_configured": false, 00:12:27.369 "data_offset": 0, 00:12:27.369 "data_size": 63488 00:12:27.369 }, 00:12:27.369 { 00:12:27.369 "name": "BaseBdev2", 00:12:27.369 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:27.369 "is_configured": true, 00:12:27.369 "data_offset": 2048, 00:12:27.369 "data_size": 63488 00:12:27.369 }, 00:12:27.369 { 00:12:27.369 "name": "BaseBdev3", 00:12:27.369 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:27.369 "is_configured": true, 00:12:27.369 "data_offset": 2048, 00:12:27.369 "data_size": 63488 00:12:27.369 }, 00:12:27.369 { 00:12:27.369 "name": "BaseBdev4", 00:12:27.369 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:27.369 "is_configured": true, 00:12:27.369 "data_offset": 2048, 00:12:27.369 "data_size": 63488 00:12:27.369 } 00:12:27.369 ] 00:12:27.369 }' 00:12:27.369 10:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.369 10:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.935 10:06:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.935 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5957d26-1a5f-46d1-bd05-17c970b43570 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.194 [2024-11-19 10:06:42.223947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:28.194 [2024-11-19 10:06:42.224347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:28.194 [2024-11-19 10:06:42.224372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:28.194 NewBaseBdev 00:12:28.194 [2024-11-19 10:06:42.224747] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:28.194 [2024-11-19 10:06:42.225019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:28.194 [2024-11-19 10:06:42.225197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:28.194 [2024-11-19 10:06:42.225404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.194 
10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.194 [ 00:12:28.194 { 00:12:28.194 "name": "NewBaseBdev", 00:12:28.194 "aliases": [ 00:12:28.194 "f5957d26-1a5f-46d1-bd05-17c970b43570" 00:12:28.194 ], 00:12:28.194 "product_name": "Malloc disk", 00:12:28.194 "block_size": 512, 00:12:28.194 "num_blocks": 65536, 00:12:28.194 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:28.194 "assigned_rate_limits": { 00:12:28.194 "rw_ios_per_sec": 0, 00:12:28.194 "rw_mbytes_per_sec": 0, 00:12:28.194 "r_mbytes_per_sec": 0, 00:12:28.194 "w_mbytes_per_sec": 0 00:12:28.194 }, 00:12:28.194 "claimed": true, 00:12:28.194 "claim_type": "exclusive_write", 00:12:28.194 "zoned": false, 00:12:28.194 "supported_io_types": { 00:12:28.194 "read": true, 00:12:28.194 "write": true, 00:12:28.194 "unmap": true, 00:12:28.194 "flush": true, 00:12:28.194 "reset": true, 00:12:28.194 "nvme_admin": false, 00:12:28.194 "nvme_io": false, 00:12:28.194 "nvme_io_md": false, 00:12:28.194 "write_zeroes": true, 00:12:28.194 "zcopy": true, 00:12:28.194 "get_zone_info": false, 00:12:28.194 "zone_management": false, 00:12:28.194 "zone_append": false, 00:12:28.194 "compare": false, 00:12:28.194 "compare_and_write": false, 00:12:28.194 "abort": true, 00:12:28.194 "seek_hole": false, 00:12:28.194 "seek_data": false, 00:12:28.194 "copy": true, 00:12:28.194 "nvme_iov_md": false 00:12:28.194 }, 00:12:28.194 "memory_domains": [ 00:12:28.194 { 00:12:28.194 "dma_device_id": "system", 00:12:28.194 "dma_device_type": 1 00:12:28.194 }, 00:12:28.194 { 00:12:28.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.194 "dma_device_type": 2 00:12:28.194 } 00:12:28.194 ], 00:12:28.194 "driver_specific": {} 00:12:28.194 } 00:12:28.194 ] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.194 10:06:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.194 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.195 "name": "Existed_Raid", 00:12:28.195 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:28.195 "strip_size_kb": 0, 00:12:28.195 
"state": "online", 00:12:28.195 "raid_level": "raid1", 00:12:28.195 "superblock": true, 00:12:28.195 "num_base_bdevs": 4, 00:12:28.195 "num_base_bdevs_discovered": 4, 00:12:28.195 "num_base_bdevs_operational": 4, 00:12:28.195 "base_bdevs_list": [ 00:12:28.195 { 00:12:28.195 "name": "NewBaseBdev", 00:12:28.195 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:28.195 "is_configured": true, 00:12:28.195 "data_offset": 2048, 00:12:28.195 "data_size": 63488 00:12:28.195 }, 00:12:28.195 { 00:12:28.195 "name": "BaseBdev2", 00:12:28.195 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:28.195 "is_configured": true, 00:12:28.195 "data_offset": 2048, 00:12:28.195 "data_size": 63488 00:12:28.195 }, 00:12:28.195 { 00:12:28.195 "name": "BaseBdev3", 00:12:28.195 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:28.195 "is_configured": true, 00:12:28.195 "data_offset": 2048, 00:12:28.195 "data_size": 63488 00:12:28.195 }, 00:12:28.195 { 00:12:28.195 "name": "BaseBdev4", 00:12:28.195 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:28.195 "is_configured": true, 00:12:28.195 "data_offset": 2048, 00:12:28.195 "data_size": 63488 00:12:28.195 } 00:12:28.195 ] 00:12:28.195 }' 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.195 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.763 
10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 [2024-11-19 10:06:42.764633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.763 "name": "Existed_Raid", 00:12:28.763 "aliases": [ 00:12:28.763 "1a2373a4-415c-48e5-8722-4c5ec8ec95bf" 00:12:28.763 ], 00:12:28.763 "product_name": "Raid Volume", 00:12:28.763 "block_size": 512, 00:12:28.763 "num_blocks": 63488, 00:12:28.763 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:28.763 "assigned_rate_limits": { 00:12:28.763 "rw_ios_per_sec": 0, 00:12:28.763 "rw_mbytes_per_sec": 0, 00:12:28.763 "r_mbytes_per_sec": 0, 00:12:28.763 "w_mbytes_per_sec": 0 00:12:28.763 }, 00:12:28.763 "claimed": false, 00:12:28.763 "zoned": false, 00:12:28.763 "supported_io_types": { 00:12:28.763 "read": true, 00:12:28.763 "write": true, 00:12:28.763 "unmap": false, 00:12:28.763 "flush": false, 00:12:28.763 "reset": true, 00:12:28.763 "nvme_admin": false, 00:12:28.763 "nvme_io": false, 00:12:28.763 "nvme_io_md": false, 00:12:28.763 "write_zeroes": true, 00:12:28.763 "zcopy": false, 00:12:28.763 "get_zone_info": false, 00:12:28.763 "zone_management": false, 00:12:28.763 "zone_append": false, 00:12:28.763 "compare": false, 00:12:28.763 "compare_and_write": false, 00:12:28.763 
"abort": false, 00:12:28.763 "seek_hole": false, 00:12:28.763 "seek_data": false, 00:12:28.763 "copy": false, 00:12:28.763 "nvme_iov_md": false 00:12:28.763 }, 00:12:28.763 "memory_domains": [ 00:12:28.763 { 00:12:28.763 "dma_device_id": "system", 00:12:28.763 "dma_device_type": 1 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.763 "dma_device_type": 2 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "system", 00:12:28.763 "dma_device_type": 1 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.763 "dma_device_type": 2 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "system", 00:12:28.763 "dma_device_type": 1 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.763 "dma_device_type": 2 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "system", 00:12:28.763 "dma_device_type": 1 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.763 "dma_device_type": 2 00:12:28.763 } 00:12:28.763 ], 00:12:28.763 "driver_specific": { 00:12:28.763 "raid": { 00:12:28.763 "uuid": "1a2373a4-415c-48e5-8722-4c5ec8ec95bf", 00:12:28.763 "strip_size_kb": 0, 00:12:28.763 "state": "online", 00:12:28.763 "raid_level": "raid1", 00:12:28.763 "superblock": true, 00:12:28.763 "num_base_bdevs": 4, 00:12:28.763 "num_base_bdevs_discovered": 4, 00:12:28.763 "num_base_bdevs_operational": 4, 00:12:28.763 "base_bdevs_list": [ 00:12:28.763 { 00:12:28.763 "name": "NewBaseBdev", 00:12:28.763 "uuid": "f5957d26-1a5f-46d1-bd05-17c970b43570", 00:12:28.763 "is_configured": true, 00:12:28.763 "data_offset": 2048, 00:12:28.763 "data_size": 63488 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "name": "BaseBdev2", 00:12:28.763 "uuid": "b4e059f9-fddf-446d-a665-5a2a344e1386", 00:12:28.763 "is_configured": true, 00:12:28.763 "data_offset": 2048, 00:12:28.763 "data_size": 63488 00:12:28.763 }, 00:12:28.763 { 
00:12:28.763 "name": "BaseBdev3", 00:12:28.763 "uuid": "3ff1eba1-e362-413d-8d51-612268a56543", 00:12:28.763 "is_configured": true, 00:12:28.763 "data_offset": 2048, 00:12:28.763 "data_size": 63488 00:12:28.763 }, 00:12:28.763 { 00:12:28.763 "name": "BaseBdev4", 00:12:28.763 "uuid": "20cdbd0b-7487-4194-8dec-f459bc92f31e", 00:12:28.763 "is_configured": true, 00:12:28.763 "data_offset": 2048, 00:12:28.763 "data_size": 63488 00:12:28.763 } 00:12:28.763 ] 00:12:28.763 } 00:12:28.763 } 00:12:28.763 }' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:28.763 BaseBdev2 00:12:28.763 BaseBdev3 00:12:28.763 BaseBdev4' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 10:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.023 [2024-11-19 10:06:43.132239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.023 [2024-11-19 10:06:43.132436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.023 [2024-11-19 10:06:43.132571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.023 [2024-11-19 10:06:43.133010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.023 [2024-11-19 10:06:43.133036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73899 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73899 ']' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73899 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73899 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73899' 00:12:29.023 killing process with pid 73899 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73899 00:12:29.023 [2024-11-19 10:06:43.174621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.023 10:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73899 00:12:29.590 [2024-11-19 10:06:43.542718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.527 10:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:30.527 00:12:30.527 real 0m13.170s 00:12:30.527 user 0m21.546s 00:12:30.527 sys 0m2.001s 00:12:30.527 10:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:30.527 10:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.527 ************************************ 00:12:30.527 END TEST raid_state_function_test_sb 00:12:30.527 ************************************ 00:12:30.527 10:06:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:30.527 10:06:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:30.527 10:06:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.527 10:06:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.527 ************************************ 00:12:30.527 START TEST raid_superblock_test 00:12:30.527 ************************************ 00:12:30.527 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:30.527 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:30.527 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:30.527 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:30.528 10:06:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74588 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74588 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74588 ']' 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.528 10:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.787 [2024-11-19 10:06:44.861438] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:12:30.787 [2024-11-19 10:06:44.861902] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74588 ] 00:12:31.045 [2024-11-19 10:06:45.049882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.045 [2024-11-19 10:06:45.192011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.304 [2024-11-19 10:06:45.416514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.304 [2024-11-19 10:06:45.416603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:31.903 
10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 malloc1 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 [2024-11-19 10:06:45.907012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.903 [2024-11-19 10:06:45.907293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.903 [2024-11-19 10:06:45.907457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:31.903 [2024-11-19 10:06:45.907581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.903 [2024-11-19 10:06:45.910696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.903 [2024-11-19 10:06:45.910900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.903 pt1 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 malloc2 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 [2024-11-19 10:06:45.967351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.903 [2024-11-19 10:06:45.967447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.903 [2024-11-19 10:06:45.967479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:31.903 [2024-11-19 10:06:45.967492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.903 [2024-11-19 10:06:45.970576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.903 [2024-11-19 10:06:45.970618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.903 
pt2 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 malloc3 00:12:31.903 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.903 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.903 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.903 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 [2024-11-19 10:06:46.039813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.903 [2024-11-19 10:06:46.039902] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.903 [2024-11-19 10:06:46.039940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:31.903 [2024-11-19 10:06:46.039956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.903 [2024-11-19 10:06:46.042979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.903 [2024-11-19 10:06:46.043023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.903 pt3 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.904 malloc4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.904 [2024-11-19 10:06:46.099537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.904 [2024-11-19 10:06:46.099612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.904 [2024-11-19 10:06:46.099656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:31.904 [2024-11-19 10:06:46.099670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.904 [2024-11-19 10:06:46.102869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.904 [2024-11-19 10:06:46.102930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.904 pt4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.904 [2024-11-19 10:06:46.111738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.904 [2024-11-19 10:06:46.114357] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.904 [2024-11-19 10:06:46.114446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.904 [2024-11-19 10:06:46.114511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.904 [2024-11-19 10:06:46.114746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:31.904 [2024-11-19 10:06:46.114770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.904 [2024-11-19 10:06:46.115187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.904 [2024-11-19 10:06:46.115454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:31.904 [2024-11-19 10:06:46.115478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:31.904 [2024-11-19 10:06:46.115738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.904 
10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.904 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.164 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.164 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.164 "name": "raid_bdev1", 00:12:32.164 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:32.164 "strip_size_kb": 0, 00:12:32.164 "state": "online", 00:12:32.164 "raid_level": "raid1", 00:12:32.164 "superblock": true, 00:12:32.164 "num_base_bdevs": 4, 00:12:32.164 "num_base_bdevs_discovered": 4, 00:12:32.164 "num_base_bdevs_operational": 4, 00:12:32.164 "base_bdevs_list": [ 00:12:32.164 { 00:12:32.164 "name": "pt1", 00:12:32.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.164 "is_configured": true, 00:12:32.164 "data_offset": 2048, 00:12:32.164 "data_size": 63488 00:12:32.164 }, 00:12:32.164 { 00:12:32.164 "name": "pt2", 00:12:32.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.164 "is_configured": true, 00:12:32.164 "data_offset": 2048, 00:12:32.164 "data_size": 63488 00:12:32.164 }, 00:12:32.164 { 00:12:32.164 "name": "pt3", 00:12:32.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.164 "is_configured": true, 00:12:32.164 "data_offset": 2048, 00:12:32.164 "data_size": 63488 
00:12:32.164 }, 00:12:32.164 { 00:12:32.164 "name": "pt4", 00:12:32.164 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.164 "is_configured": true, 00:12:32.164 "data_offset": 2048, 00:12:32.164 "data_size": 63488 00:12:32.164 } 00:12:32.164 ] 00:12:32.164 }' 00:12:32.164 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.164 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.423 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.683 [2024-11-19 10:06:46.660415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.683 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.683 "name": "raid_bdev1", 00:12:32.683 "aliases": [ 00:12:32.683 "8c1d083a-635c-4751-8b42-d34b4b10091b" 00:12:32.683 ], 
00:12:32.683 "product_name": "Raid Volume", 00:12:32.683 "block_size": 512, 00:12:32.683 "num_blocks": 63488, 00:12:32.683 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:32.683 "assigned_rate_limits": { 00:12:32.683 "rw_ios_per_sec": 0, 00:12:32.683 "rw_mbytes_per_sec": 0, 00:12:32.683 "r_mbytes_per_sec": 0, 00:12:32.683 "w_mbytes_per_sec": 0 00:12:32.683 }, 00:12:32.683 "claimed": false, 00:12:32.683 "zoned": false, 00:12:32.683 "supported_io_types": { 00:12:32.683 "read": true, 00:12:32.683 "write": true, 00:12:32.683 "unmap": false, 00:12:32.683 "flush": false, 00:12:32.683 "reset": true, 00:12:32.683 "nvme_admin": false, 00:12:32.683 "nvme_io": false, 00:12:32.683 "nvme_io_md": false, 00:12:32.683 "write_zeroes": true, 00:12:32.683 "zcopy": false, 00:12:32.683 "get_zone_info": false, 00:12:32.683 "zone_management": false, 00:12:32.683 "zone_append": false, 00:12:32.683 "compare": false, 00:12:32.683 "compare_and_write": false, 00:12:32.683 "abort": false, 00:12:32.683 "seek_hole": false, 00:12:32.683 "seek_data": false, 00:12:32.683 "copy": false, 00:12:32.683 "nvme_iov_md": false 00:12:32.683 }, 00:12:32.683 "memory_domains": [ 00:12:32.683 { 00:12:32.683 "dma_device_id": "system", 00:12:32.683 "dma_device_type": 1 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.683 "dma_device_type": 2 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "system", 00:12:32.683 "dma_device_type": 1 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.683 "dma_device_type": 2 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "system", 00:12:32.683 "dma_device_type": 1 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.683 "dma_device_type": 2 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": "system", 00:12:32.683 "dma_device_type": 1 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:32.683 "dma_device_type": 2 00:12:32.683 } 00:12:32.683 ], 00:12:32.683 "driver_specific": { 00:12:32.683 "raid": { 00:12:32.683 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:32.683 "strip_size_kb": 0, 00:12:32.683 "state": "online", 00:12:32.683 "raid_level": "raid1", 00:12:32.683 "superblock": true, 00:12:32.683 "num_base_bdevs": 4, 00:12:32.683 "num_base_bdevs_discovered": 4, 00:12:32.683 "num_base_bdevs_operational": 4, 00:12:32.683 "base_bdevs_list": [ 00:12:32.683 { 00:12:32.683 "name": "pt1", 00:12:32.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.683 "is_configured": true, 00:12:32.683 "data_offset": 2048, 00:12:32.683 "data_size": 63488 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "name": "pt2", 00:12:32.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.683 "is_configured": true, 00:12:32.683 "data_offset": 2048, 00:12:32.683 "data_size": 63488 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "name": "pt3", 00:12:32.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.683 "is_configured": true, 00:12:32.683 "data_offset": 2048, 00:12:32.683 "data_size": 63488 00:12:32.683 }, 00:12:32.683 { 00:12:32.683 "name": "pt4", 00:12:32.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.683 "is_configured": true, 00:12:32.683 "data_offset": 2048, 00:12:32.683 "data_size": 63488 00:12:32.683 } 00:12:32.683 ] 00:12:32.683 } 00:12:32.684 } 00:12:32.684 }' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:32.684 pt2 00:12:32.684 pt3 00:12:32.684 pt4' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.684 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.944 10:06:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
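The `bdev_raid.sh@193` checks above compare a jq-joined property string against a bash `[[ ... == pattern ]]` test; the escaped spaces in the xtrace output (`\5\1\2\ \ \ `) are how `set -x` renders `512` followed by three spaces, since the null `md_size`, `md_interleave`, and `dif_type` fields join as empty strings. A minimal standalone sketch of that comparison, with hypothetical hard-coded field values standing in for the `rpc_cmd bdev_get_bdevs` output:

```shell
# Hypothetical property fields as bdev_get_bdevs would report them for a
# 512-byte-block passthru bdev with no metadata (null fields join as "").
block_size=512
md_size=""           # null in the RPC output
md_interleave=""     # null in the RPC output
dif_type=""          # null in the RPC output

# Mirror of the jq '[... ] | join(" ")' step: four fields, three spaces.
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"
cmp_raid_bdev='512   '   # "512" followed by three spaces

if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    echo "properties match"
else
    echo "properties differ" >&2
    exit 1
fi
```

The same comparison string is computed once for the raid bdev and once per base bdev, so a metadata or block-size mismatch on any member fails the loop.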
00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:32.944 [2024-11-19 10:06:47.044492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8c1d083a-635c-4751-8b42-d34b4b10091b 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8c1d083a-635c-4751-8b42-d34b4b10091b ']' 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 [2024-11-19 10:06:47.092155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.944 [2024-11-19 10:06:47.092338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.944 [2024-11-19 10:06:47.092579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.944 [2024-11-19 10:06:47.092723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.944 [2024-11-19 10:06:47.092749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.944 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 [2024-11-19 10:06:47.240218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:33.204 [2024-11-19 10:06:47.243205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:33.204 [2024-11-19 10:06:47.243449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:33.204 [2024-11-19 10:06:47.243553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:33.204 [2024-11-19 10:06:47.243738] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:33.204 [2024-11-19 10:06:47.243994] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:33.204 [2024-11-19 10:06:47.244268] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:33.204 [2024-11-19 10:06:47.244499] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:33.204 [2024-11-19 10:06:47.244649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.204 [2024-11-19 10:06:47.244702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:12:33.204 request: 00:12:33.204 { 00:12:33.204 "name": "raid_bdev1", 00:12:33.204 "raid_level": "raid1", 00:12:33.204 "base_bdevs": [ 00:12:33.204 "malloc1", 00:12:33.204 "malloc2", 00:12:33.204 "malloc3", 00:12:33.204 "malloc4" 00:12:33.204 ], 00:12:33.204 "superblock": false, 00:12:33.204 "method": "bdev_raid_create", 00:12:33.204 "req_id": 1 00:12:33.204 } 00:12:33.204 Got JSON-RPC error response 00:12:33.204 response: 00:12:33.204 { 00:12:33.204 "code": -17, 00:12:33.204 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:33.204 } 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.204 10:06:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 [2024-11-19 10:06:47.309128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.204 [2024-11-19 10:06:47.309346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.204 [2024-11-19 10:06:47.309420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:33.204 [2024-11-19 10:06:47.309532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.204 [2024-11-19 10:06:47.312704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.204 [2024-11-19 10:06:47.312884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.204 [2024-11-19 10:06:47.313002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:33.204 [2024-11-19 10:06:47.313080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:33.204 pt1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.204 10:06:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.204 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.204 "name": "raid_bdev1", 00:12:33.204 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:33.204 "strip_size_kb": 0, 00:12:33.204 "state": "configuring", 00:12:33.205 "raid_level": "raid1", 00:12:33.205 "superblock": true, 00:12:33.205 "num_base_bdevs": 4, 00:12:33.205 "num_base_bdevs_discovered": 1, 00:12:33.205 "num_base_bdevs_operational": 4, 00:12:33.205 "base_bdevs_list": [ 00:12:33.205 { 00:12:33.205 "name": "pt1", 00:12:33.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.205 "is_configured": true, 00:12:33.205 "data_offset": 2048, 00:12:33.205 "data_size": 63488 00:12:33.205 }, 00:12:33.205 { 00:12:33.205 "name": null, 00:12:33.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.205 "is_configured": false, 00:12:33.205 "data_offset": 2048, 00:12:33.205 "data_size": 63488 00:12:33.205 }, 00:12:33.205 { 00:12:33.205 "name": null, 00:12:33.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.205 
"is_configured": false, 00:12:33.205 "data_offset": 2048, 00:12:33.205 "data_size": 63488 00:12:33.205 }, 00:12:33.205 { 00:12:33.205 "name": null, 00:12:33.205 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.205 "is_configured": false, 00:12:33.205 "data_offset": 2048, 00:12:33.205 "data_size": 63488 00:12:33.205 } 00:12:33.205 ] 00:12:33.205 }' 00:12:33.205 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.205 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.773 [2024-11-19 10:06:47.837554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.773 [2024-11-19 10:06:47.837668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.773 [2024-11-19 10:06:47.837702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:33.773 [2024-11-19 10:06:47.837722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.773 [2024-11-19 10:06:47.838390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.773 [2024-11-19 10:06:47.838620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.773 [2024-11-19 10:06:47.838756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:33.773 [2024-11-19 10:06:47.838827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:33.773 pt2 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.773 [2024-11-19 10:06:47.849550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
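`verify_raid_bdev_state` (`bdev_raid.sh@103`–`@115` in the trace) selects the named raid bdev out of `bdev_raid_get_bdevs all` with jq and compares the expected state, level, strip size, and base-bdev counts against the reported ones. A reduced bash sketch of the comparison stage, with the jq extraction replaced by hypothetical hard-coded values matching the "configuring" snapshot above:

```shell
# Values a real run would pull out of 'bdev_raid_get_bdevs all' via jq;
# hard-coded here so the sketch is self-contained (values hypothetical).
raid_state="configuring"
raid_level="raid1"
strip_size_kb=0
num_base_bdevs_operational=4

verify_raid_bdev_state() {
    local expected_state=$1 expected_level=$2
    local expected_strip=$3 expected_operational=$4
    [[ $raid_state == "$expected_state" ]] || { echo "state mismatch" >&2; return 1; }
    [[ $raid_level == "$expected_level" ]] || { echo "level mismatch" >&2; return 1; }
    (( strip_size_kb == expected_strip )) || { echo "strip size mismatch" >&2; return 1; }
    (( num_base_bdevs_operational == expected_operational )) || { echo "count mismatch" >&2; return 1; }
    echo "raid bdev state verified"
}

verify_raid_bdev_state configuring raid1 0 4
```

In the log this is invoked first with `configuring` (only pt1 discovered) and later with `online` once all four passthru bdevs are claimed.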
00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.773 "name": "raid_bdev1", 00:12:33.773 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:33.773 "strip_size_kb": 0, 00:12:33.773 "state": "configuring", 00:12:33.773 "raid_level": "raid1", 00:12:33.773 "superblock": true, 00:12:33.773 "num_base_bdevs": 4, 00:12:33.773 "num_base_bdevs_discovered": 1, 00:12:33.773 "num_base_bdevs_operational": 4, 00:12:33.773 "base_bdevs_list": [ 00:12:33.773 { 00:12:33.773 "name": "pt1", 00:12:33.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.773 "is_configured": true, 00:12:33.773 "data_offset": 2048, 00:12:33.773 "data_size": 63488 00:12:33.773 }, 00:12:33.773 { 00:12:33.773 "name": null, 00:12:33.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.773 "is_configured": false, 00:12:33.773 "data_offset": 0, 00:12:33.773 "data_size": 63488 00:12:33.773 }, 00:12:33.773 { 00:12:33.773 "name": null, 00:12:33.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.773 "is_configured": false, 00:12:33.773 "data_offset": 2048, 00:12:33.773 "data_size": 63488 00:12:33.773 }, 00:12:33.773 { 00:12:33.773 "name": null, 00:12:33.773 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.773 "is_configured": false, 00:12:33.773 "data_offset": 2048, 00:12:33.773 "data_size": 63488 00:12:33.773 } 00:12:33.773 ] 00:12:33.773 }' 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.773 10:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.340 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:34.340 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.340 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.340 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.340 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.340 [2024-11-19 10:06:48.393740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.340 [2024-11-19 10:06:48.393873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.340 [2024-11-19 10:06:48.393920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:34.340 [2024-11-19 10:06:48.393939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.341 [2024-11-19 10:06:48.394588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.341 [2024-11-19 10:06:48.394622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.341 [2024-11-19 10:06:48.394812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:34.341 [2024-11-19 10:06:48.394866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:34.341 pt2 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:34.341 10:06:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.341 [2024-11-19 10:06:48.401698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:34.341 [2024-11-19 10:06:48.401802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.341 [2024-11-19 10:06:48.401846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:34.341 [2024-11-19 10:06:48.401863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.341 [2024-11-19 10:06:48.402436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.341 [2024-11-19 10:06:48.402467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:34.341 [2024-11-19 10:06:48.402597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:34.341 [2024-11-19 10:06:48.402629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:34.341 pt3 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.341 [2024-11-19 10:06:48.409666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:34.341 [2024-11-19 
10:06:48.409892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.341 [2024-11-19 10:06:48.409937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:34.341 [2024-11-19 10:06:48.409955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.341 [2024-11-19 10:06:48.410539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.341 [2024-11-19 10:06:48.410572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:34.341 [2024-11-19 10:06:48.410677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:34.341 [2024-11-19 10:06:48.410710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:34.341 [2024-11-19 10:06:48.410938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.341 [2024-11-19 10:06:48.410962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.341 [2024-11-19 10:06:48.411292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:34.341 [2024-11-19 10:06:48.411502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.341 [2024-11-19 10:06:48.411523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:34.341 [2024-11-19 10:06:48.411740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.341 pt4 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.341 "name": "raid_bdev1", 00:12:34.341 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:34.341 "strip_size_kb": 0, 00:12:34.341 "state": "online", 00:12:34.341 "raid_level": "raid1", 00:12:34.341 "superblock": true, 00:12:34.341 "num_base_bdevs": 4, 00:12:34.341 
"num_base_bdevs_discovered": 4, 00:12:34.341 "num_base_bdevs_operational": 4, 00:12:34.341 "base_bdevs_list": [ 00:12:34.341 { 00:12:34.341 "name": "pt1", 00:12:34.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.341 "is_configured": true, 00:12:34.341 "data_offset": 2048, 00:12:34.341 "data_size": 63488 00:12:34.341 }, 00:12:34.341 { 00:12:34.341 "name": "pt2", 00:12:34.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.341 "is_configured": true, 00:12:34.341 "data_offset": 2048, 00:12:34.341 "data_size": 63488 00:12:34.341 }, 00:12:34.341 { 00:12:34.341 "name": "pt3", 00:12:34.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.341 "is_configured": true, 00:12:34.341 "data_offset": 2048, 00:12:34.341 "data_size": 63488 00:12:34.341 }, 00:12:34.341 { 00:12:34.341 "name": "pt4", 00:12:34.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:34.341 "is_configured": true, 00:12:34.341 "data_offset": 2048, 00:12:34.341 "data_size": 63488 00:12:34.341 } 00:12:34.341 ] 00:12:34.341 }' 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.341 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.908 [2024-11-19 10:06:48.954365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.908 10:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.908 "name": "raid_bdev1", 00:12:34.908 "aliases": [ 00:12:34.908 "8c1d083a-635c-4751-8b42-d34b4b10091b" 00:12:34.908 ], 00:12:34.908 "product_name": "Raid Volume", 00:12:34.908 "block_size": 512, 00:12:34.908 "num_blocks": 63488, 00:12:34.908 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:34.908 "assigned_rate_limits": { 00:12:34.908 "rw_ios_per_sec": 0, 00:12:34.908 "rw_mbytes_per_sec": 0, 00:12:34.909 "r_mbytes_per_sec": 0, 00:12:34.909 "w_mbytes_per_sec": 0 00:12:34.909 }, 00:12:34.909 "claimed": false, 00:12:34.909 "zoned": false, 00:12:34.909 "supported_io_types": { 00:12:34.909 "read": true, 00:12:34.909 "write": true, 00:12:34.909 "unmap": false, 00:12:34.909 "flush": false, 00:12:34.909 "reset": true, 00:12:34.909 "nvme_admin": false, 00:12:34.909 "nvme_io": false, 00:12:34.909 "nvme_io_md": false, 00:12:34.909 "write_zeroes": true, 00:12:34.909 "zcopy": false, 00:12:34.909 "get_zone_info": false, 00:12:34.909 "zone_management": false, 00:12:34.909 "zone_append": false, 00:12:34.909 "compare": false, 00:12:34.909 "compare_and_write": false, 00:12:34.909 "abort": false, 00:12:34.909 "seek_hole": false, 00:12:34.909 "seek_data": false, 00:12:34.909 "copy": false, 00:12:34.909 "nvme_iov_md": false 00:12:34.909 }, 00:12:34.909 "memory_domains": [ 00:12:34.909 { 00:12:34.909 "dma_device_id": "system", 00:12:34.909 
"dma_device_type": 1 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.909 "dma_device_type": 2 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "system", 00:12:34.909 "dma_device_type": 1 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.909 "dma_device_type": 2 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "system", 00:12:34.909 "dma_device_type": 1 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.909 "dma_device_type": 2 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "system", 00:12:34.909 "dma_device_type": 1 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.909 "dma_device_type": 2 00:12:34.909 } 00:12:34.909 ], 00:12:34.909 "driver_specific": { 00:12:34.909 "raid": { 00:12:34.909 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:34.909 "strip_size_kb": 0, 00:12:34.909 "state": "online", 00:12:34.909 "raid_level": "raid1", 00:12:34.909 "superblock": true, 00:12:34.909 "num_base_bdevs": 4, 00:12:34.909 "num_base_bdevs_discovered": 4, 00:12:34.909 "num_base_bdevs_operational": 4, 00:12:34.909 "base_bdevs_list": [ 00:12:34.909 { 00:12:34.909 "name": "pt1", 00:12:34.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.909 "is_configured": true, 00:12:34.909 "data_offset": 2048, 00:12:34.909 "data_size": 63488 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "name": "pt2", 00:12:34.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.909 "is_configured": true, 00:12:34.909 "data_offset": 2048, 00:12:34.909 "data_size": 63488 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "name": "pt3", 00:12:34.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.909 "is_configured": true, 00:12:34.909 "data_offset": 2048, 00:12:34.909 "data_size": 63488 00:12:34.909 }, 00:12:34.909 { 00:12:34.909 "name": "pt4", 00:12:34.909 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:34.909 "is_configured": true, 00:12:34.909 "data_offset": 2048, 00:12:34.909 "data_size": 63488 00:12:34.909 } 00:12:34.909 ] 00:12:34.909 } 00:12:34.909 } 00:12:34.909 }' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:34.909 pt2 00:12:34.909 pt3 00:12:34.909 pt4' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.909 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 10:06:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 [2024-11-19 10:06:49.334410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8c1d083a-635c-4751-8b42-d34b4b10091b '!=' 8c1d083a-635c-4751-8b42-d34b4b10091b ']' 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 [2024-11-19 10:06:49.386149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:35.174 
10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.433 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.433 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.433 "name": "raid_bdev1", 00:12:35.433 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:35.433 "strip_size_kb": 0, 00:12:35.433 "state": 
"online", 00:12:35.433 "raid_level": "raid1", 00:12:35.433 "superblock": true, 00:12:35.433 "num_base_bdevs": 4, 00:12:35.433 "num_base_bdevs_discovered": 3, 00:12:35.433 "num_base_bdevs_operational": 3, 00:12:35.433 "base_bdevs_list": [ 00:12:35.433 { 00:12:35.433 "name": null, 00:12:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.433 "is_configured": false, 00:12:35.433 "data_offset": 0, 00:12:35.433 "data_size": 63488 00:12:35.433 }, 00:12:35.433 { 00:12:35.433 "name": "pt2", 00:12:35.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.433 "is_configured": true, 00:12:35.433 "data_offset": 2048, 00:12:35.433 "data_size": 63488 00:12:35.433 }, 00:12:35.433 { 00:12:35.433 "name": "pt3", 00:12:35.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.433 "is_configured": true, 00:12:35.433 "data_offset": 2048, 00:12:35.433 "data_size": 63488 00:12:35.433 }, 00:12:35.433 { 00:12:35.433 "name": "pt4", 00:12:35.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.433 "is_configured": true, 00:12:35.434 "data_offset": 2048, 00:12:35.434 "data_size": 63488 00:12:35.434 } 00:12:35.434 ] 00:12:35.434 }' 00:12:35.434 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.434 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 [2024-11-19 10:06:49.898222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.693 [2024-11-19 10:06:49.898265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.693 [2024-11-19 10:06:49.898377] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.693 [2024-11-19 10:06:49.898487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.693 [2024-11-19 10:06:49.898504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.953 [2024-11-19 10:06:49.990217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:35.953 [2024-11-19 
10:06:49.990322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.953 [2024-11-19 10:06:49.990355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:35.953 [2024-11-19 10:06:49.990369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.953 [2024-11-19 10:06:49.993746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.953 [2024-11-19 10:06:49.993823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:35.953 [2024-11-19 10:06:49.993988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:35.953 [2024-11-19 10:06:49.994063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.953 pt2 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.953 10:06:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.953 10:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.953 "name": "raid_bdev1", 00:12:35.953 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:35.953 "strip_size_kb": 0, 00:12:35.953 "state": "configuring", 00:12:35.953 "raid_level": "raid1", 00:12:35.953 "superblock": true, 00:12:35.953 "num_base_bdevs": 4, 00:12:35.953 "num_base_bdevs_discovered": 1, 00:12:35.953 "num_base_bdevs_operational": 3, 00:12:35.953 "base_bdevs_list": [ 00:12:35.953 { 00:12:35.953 "name": null, 00:12:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.953 "is_configured": false, 00:12:35.953 "data_offset": 2048, 00:12:35.953 "data_size": 63488 00:12:35.953 }, 00:12:35.953 { 00:12:35.953 "name": "pt2", 00:12:35.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.953 "is_configured": true, 00:12:35.953 "data_offset": 2048, 00:12:35.953 "data_size": 63488 00:12:35.953 }, 00:12:35.953 { 00:12:35.953 "name": null, 00:12:35.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.953 "is_configured": false, 00:12:35.953 "data_offset": 2048, 00:12:35.953 "data_size": 63488 00:12:35.953 }, 00:12:35.953 { 00:12:35.953 "name": null, 00:12:35.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.953 "is_configured": false, 00:12:35.953 "data_offset": 2048, 00:12:35.953 "data_size": 63488 00:12:35.953 
} 00:12:35.953 ] 00:12:35.953 }' 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.953 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.520 [2024-11-19 10:06:50.514573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:36.520 [2024-11-19 10:06:50.514699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.520 [2024-11-19 10:06:50.514737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:36.520 [2024-11-19 10:06:50.514765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.520 [2024-11-19 10:06:50.515512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.520 [2024-11-19 10:06:50.515580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:36.520 [2024-11-19 10:06:50.515717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:36.520 [2024-11-19 10:06:50.515752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:36.520 pt3 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.520 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.520 "name": "raid_bdev1", 00:12:36.520 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:36.520 "strip_size_kb": 0, 00:12:36.520 "state": "configuring", 00:12:36.520 "raid_level": "raid1", 00:12:36.520 "superblock": true, 00:12:36.520 "num_base_bdevs": 4, 00:12:36.520 "num_base_bdevs_discovered": 2, 
00:12:36.520 "num_base_bdevs_operational": 3, 00:12:36.520 "base_bdevs_list": [ 00:12:36.520 { 00:12:36.520 "name": null, 00:12:36.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.520 "is_configured": false, 00:12:36.520 "data_offset": 2048, 00:12:36.520 "data_size": 63488 00:12:36.520 }, 00:12:36.520 { 00:12:36.520 "name": "pt2", 00:12:36.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.520 "is_configured": true, 00:12:36.520 "data_offset": 2048, 00:12:36.520 "data_size": 63488 00:12:36.520 }, 00:12:36.520 { 00:12:36.520 "name": "pt3", 00:12:36.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.520 "is_configured": true, 00:12:36.520 "data_offset": 2048, 00:12:36.520 "data_size": 63488 00:12:36.520 }, 00:12:36.520 { 00:12:36.520 "name": null, 00:12:36.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.520 "is_configured": false, 00:12:36.520 "data_offset": 2048, 00:12:36.520 "data_size": 63488 00:12:36.520 } 00:12:36.520 ] 00:12:36.520 }' 00:12:36.521 10:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.521 10:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.088 [2024-11-19 10:06:51.022746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:37.088 [2024-11-19 
10:06:51.022882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.088 [2024-11-19 10:06:51.022934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:37.088 [2024-11-19 10:06:51.022951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.088 [2024-11-19 10:06:51.023620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.088 [2024-11-19 10:06:51.023651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:37.088 [2024-11-19 10:06:51.023780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:37.088 [2024-11-19 10:06:51.023856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:37.088 [2024-11-19 10:06:51.024056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:37.088 [2024-11-19 10:06:51.024075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.088 [2024-11-19 10:06:51.024426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:37.088 [2024-11-19 10:06:51.024614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:37.088 [2024-11-19 10:06:51.024634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:37.088 [2024-11-19 10:06:51.024837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.088 pt4 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.088 10:06:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.088 "name": "raid_bdev1", 00:12:37.088 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:37.088 "strip_size_kb": 0, 00:12:37.088 "state": "online", 00:12:37.088 "raid_level": "raid1", 00:12:37.088 "superblock": true, 00:12:37.088 "num_base_bdevs": 4, 00:12:37.088 "num_base_bdevs_discovered": 3, 00:12:37.088 "num_base_bdevs_operational": 3, 00:12:37.088 "base_bdevs_list": [ 00:12:37.088 { 00:12:37.088 "name": null, 00:12:37.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.088 
"is_configured": false, 00:12:37.088 "data_offset": 2048, 00:12:37.088 "data_size": 63488 00:12:37.088 }, 00:12:37.088 { 00:12:37.088 "name": "pt2", 00:12:37.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.088 "is_configured": true, 00:12:37.088 "data_offset": 2048, 00:12:37.088 "data_size": 63488 00:12:37.088 }, 00:12:37.088 { 00:12:37.088 "name": "pt3", 00:12:37.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.088 "is_configured": true, 00:12:37.088 "data_offset": 2048, 00:12:37.088 "data_size": 63488 00:12:37.088 }, 00:12:37.088 { 00:12:37.088 "name": "pt4", 00:12:37.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.088 "is_configured": true, 00:12:37.088 "data_offset": 2048, 00:12:37.088 "data_size": 63488 00:12:37.088 } 00:12:37.088 ] 00:12:37.088 }' 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.088 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.347 [2024-11-19 10:06:51.546861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.347 [2024-11-19 10:06:51.546900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.347 [2024-11-19 10:06:51.547011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.347 [2024-11-19 10:06:51.547118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.347 [2024-11-19 10:06:51.547139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.347 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 [2024-11-19 10:06:51.622852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:37.605 [2024-11-19 10:06:51.623081] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:12:37.605 [2024-11-19 10:06:51.623246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:37.605 [2024-11-19 10:06:51.623393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.605 [2024-11-19 10:06:51.626731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.605 [2024-11-19 10:06:51.626929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:37.605 [2024-11-19 10:06:51.627153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:37.605 [2024-11-19 10:06:51.627337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:37.605 [2024-11-19 10:06:51.627699] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:37.605 [2024-11-19 10:06:51.627901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.605 pt1 00:12:37.605 [2024-11-19 10:06:51.628075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:37.605 [2024-11-19 10:06:51.628174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:37.605 [2024-11-19 10:06:51.628332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.605 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.606 "name": "raid_bdev1", 00:12:37.606 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:37.606 "strip_size_kb": 0, 00:12:37.606 "state": "configuring", 00:12:37.606 "raid_level": "raid1", 00:12:37.606 "superblock": true, 00:12:37.606 "num_base_bdevs": 4, 00:12:37.606 "num_base_bdevs_discovered": 2, 00:12:37.606 "num_base_bdevs_operational": 3, 00:12:37.606 "base_bdevs_list": [ 00:12:37.606 { 00:12:37.606 "name": null, 00:12:37.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.606 "is_configured": false, 00:12:37.606 
"data_offset": 2048, 00:12:37.606 "data_size": 63488 00:12:37.606 }, 00:12:37.606 { 00:12:37.606 "name": "pt2", 00:12:37.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.606 "is_configured": true, 00:12:37.606 "data_offset": 2048, 00:12:37.606 "data_size": 63488 00:12:37.606 }, 00:12:37.606 { 00:12:37.606 "name": "pt3", 00:12:37.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.606 "is_configured": true, 00:12:37.606 "data_offset": 2048, 00:12:37.606 "data_size": 63488 00:12:37.606 }, 00:12:37.606 { 00:12:37.606 "name": null, 00:12:37.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.606 "is_configured": false, 00:12:37.606 "data_offset": 2048, 00:12:37.606 "data_size": 63488 00:12:37.606 } 00:12:37.606 ] 00:12:37.606 }' 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.606 10:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.173 [2024-11-19 10:06:52.187449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:38.173 [2024-11-19 10:06:52.187696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.173 [2024-11-19 10:06:52.187745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:38.173 [2024-11-19 10:06:52.187763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.173 [2024-11-19 10:06:52.188443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.173 [2024-11-19 10:06:52.188468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:38.173 [2024-11-19 10:06:52.188610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:38.173 [2024-11-19 10:06:52.188653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:38.173 [2024-11-19 10:06:52.188833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:38.173 [2024-11-19 10:06:52.188850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.173 [2024-11-19 10:06:52.189224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:38.173 [2024-11-19 10:06:52.189438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:38.173 [2024-11-19 10:06:52.189466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:38.173 [2024-11-19 10:06:52.189662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.173 pt4 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.173 "name": "raid_bdev1", 00:12:38.173 "uuid": "8c1d083a-635c-4751-8b42-d34b4b10091b", 00:12:38.173 "strip_size_kb": 0, 00:12:38.173 "state": "online", 00:12:38.173 "raid_level": "raid1", 00:12:38.173 "superblock": true, 00:12:38.173 "num_base_bdevs": 4, 00:12:38.173 "num_base_bdevs_discovered": 3, 00:12:38.173 "num_base_bdevs_operational": 3, 00:12:38.173 
"base_bdevs_list": [ 00:12:38.173 { 00:12:38.173 "name": null, 00:12:38.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.173 "is_configured": false, 00:12:38.173 "data_offset": 2048, 00:12:38.173 "data_size": 63488 00:12:38.173 }, 00:12:38.173 { 00:12:38.173 "name": "pt2", 00:12:38.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.173 "is_configured": true, 00:12:38.173 "data_offset": 2048, 00:12:38.173 "data_size": 63488 00:12:38.173 }, 00:12:38.173 { 00:12:38.173 "name": "pt3", 00:12:38.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.173 "is_configured": true, 00:12:38.173 "data_offset": 2048, 00:12:38.173 "data_size": 63488 00:12:38.173 }, 00:12:38.173 { 00:12:38.173 "name": "pt4", 00:12:38.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.173 "is_configured": true, 00:12:38.173 "data_offset": 2048, 00:12:38.173 "data_size": 63488 00:12:38.173 } 00:12:38.173 ] 00:12:38.173 }' 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.173 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:38.742 [2024-11-19 10:06:52.772127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8c1d083a-635c-4751-8b42-d34b4b10091b '!=' 8c1d083a-635c-4751-8b42-d34b4b10091b ']' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74588 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74588 ']' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74588 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74588 00:12:38.742 killing process with pid 74588 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74588' 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74588 00:12:38.742 [2024-11-19 10:06:52.853740] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.742 10:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # 
wait 74588 00:12:38.742 [2024-11-19 10:06:52.853920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.742 [2024-11-19 10:06:52.854029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.742 [2024-11-19 10:06:52.854051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:39.001 [2024-11-19 10:06:53.204029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.400 10:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:40.400 00:12:40.400 real 0m9.578s 00:12:40.400 user 0m15.584s 00:12:40.400 sys 0m1.500s 00:12:40.400 ************************************ 00:12:40.400 10:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.400 10:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.400 END TEST raid_superblock_test 00:12:40.400 ************************************ 00:12:40.400 10:06:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:40.400 10:06:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:40.400 10:06:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.400 10:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.400 ************************************ 00:12:40.400 START TEST raid_read_error_test 00:12:40.400 ************************************ 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UIvkq5bRqb 00:12:40.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75082 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75082 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75082 ']' 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.400 10:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.400 [2024-11-19 10:06:54.504703] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:12:40.400 [2024-11-19 10:06:54.505157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 00:12:40.659 [2024-11-19 10:06:54.681699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.659 [2024-11-19 10:06:54.836653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.917 [2024-11-19 10:06:55.069193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.917 [2024-11-19 10:06:55.069613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 BaseBdev1_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 true 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 [2024-11-19 10:06:55.620477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:41.485 [2024-11-19 10:06:55.620709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.485 [2024-11-19 10:06:55.620755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:41.485 [2024-11-19 10:06:55.620777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.485 [2024-11-19 10:06:55.623998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.485 [2024-11-19 10:06:55.624053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.485 BaseBdev1 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 BaseBdev2_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 true 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 [2024-11-19 10:06:55.687005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:41.485 [2024-11-19 10:06:55.687141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.485 [2024-11-19 10:06:55.687170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:41.485 [2024-11-19 10:06:55.687189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.485 [2024-11-19 10:06:55.690304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.485 [2024-11-19 10:06:55.690367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.485 BaseBdev2 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.485 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.744 BaseBdev3_malloc 00:12:41.744 10:06:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.744 true 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.744 [2024-11-19 10:06:55.769596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:41.744 [2024-11-19 10:06:55.769690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.744 [2024-11-19 10:06:55.769732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:41.744 [2024-11-19 10:06:55.769753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.744 [2024-11-19 10:06:55.773026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.744 [2024-11-19 10:06:55.773123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:41.744 BaseBdev3 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.744 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 BaseBdev4_malloc 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 true 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 [2024-11-19 10:06:55.844939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:41.745 [2024-11-19 10:06:55.845032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.745 [2024-11-19 10:06:55.845072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:41.745 [2024-11-19 10:06:55.845093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.745 [2024-11-19 10:06:55.848516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.745 [2024-11-19 10:06:55.848583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:41.745 BaseBdev4 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 [2024-11-19 10:06:55.857087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.745 [2024-11-19 10:06:55.860103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.745 [2024-11-19 10:06:55.860373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.745 [2024-11-19 10:06:55.860538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.745 [2024-11-19 10:06:55.860947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:41.745 [2024-11-19 10:06:55.861093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.745 [2024-11-19 10:06:55.861519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:41.745 [2024-11-19 10:06:55.861918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:41.745 [2024-11-19 10:06:55.862046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:41.745 [2024-11-19 10:06:55.862479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:41.745 10:06:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.745 "name": "raid_bdev1", 00:12:41.745 "uuid": "8d81c31e-1951-4f1c-91c6-ed3884b28668", 00:12:41.745 "strip_size_kb": 0, 00:12:41.745 "state": "online", 00:12:41.745 "raid_level": "raid1", 00:12:41.745 "superblock": true, 00:12:41.745 "num_base_bdevs": 4, 00:12:41.745 "num_base_bdevs_discovered": 4, 00:12:41.745 "num_base_bdevs_operational": 4, 00:12:41.745 "base_bdevs_list": [ 00:12:41.745 { 
00:12:41.745 "name": "BaseBdev1", 00:12:41.745 "uuid": "351116f4-5508-5b99-96ba-8cd037355f84", 00:12:41.745 "is_configured": true, 00:12:41.745 "data_offset": 2048, 00:12:41.745 "data_size": 63488 00:12:41.745 }, 00:12:41.745 { 00:12:41.745 "name": "BaseBdev2", 00:12:41.745 "uuid": "84bd2b1d-e01f-5139-83d5-f95a567fc653", 00:12:41.745 "is_configured": true, 00:12:41.745 "data_offset": 2048, 00:12:41.745 "data_size": 63488 00:12:41.745 }, 00:12:41.745 { 00:12:41.745 "name": "BaseBdev3", 00:12:41.745 "uuid": "88c6c0b1-6794-590a-9614-bcf32eebb8fd", 00:12:41.745 "is_configured": true, 00:12:41.745 "data_offset": 2048, 00:12:41.745 "data_size": 63488 00:12:41.745 }, 00:12:41.745 { 00:12:41.745 "name": "BaseBdev4", 00:12:41.745 "uuid": "b39ace9b-6ec7-5bcd-b3a0-dc2467f45f85", 00:12:41.745 "is_configured": true, 00:12:41.745 "data_offset": 2048, 00:12:41.745 "data_size": 63488 00:12:41.745 } 00:12:41.745 ] 00:12:41.745 }' 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.745 10:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.312 10:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:42.312 10:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:42.312 [2024-11-19 10:06:56.526870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.256 10:06:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 10:06:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.256 "name": "raid_bdev1", 00:12:43.256 "uuid": "8d81c31e-1951-4f1c-91c6-ed3884b28668", 00:12:43.256 "strip_size_kb": 0, 00:12:43.256 "state": "online", 00:12:43.256 "raid_level": "raid1", 00:12:43.256 "superblock": true, 00:12:43.256 "num_base_bdevs": 4, 00:12:43.256 "num_base_bdevs_discovered": 4, 00:12:43.256 "num_base_bdevs_operational": 4, 00:12:43.256 "base_bdevs_list": [ 00:12:43.256 { 00:12:43.256 "name": "BaseBdev1", 00:12:43.256 "uuid": "351116f4-5508-5b99-96ba-8cd037355f84", 00:12:43.256 "is_configured": true, 00:12:43.256 "data_offset": 2048, 00:12:43.256 "data_size": 63488 00:12:43.256 }, 00:12:43.256 { 00:12:43.256 "name": "BaseBdev2", 00:12:43.256 "uuid": "84bd2b1d-e01f-5139-83d5-f95a567fc653", 00:12:43.256 "is_configured": true, 00:12:43.256 "data_offset": 2048, 00:12:43.256 "data_size": 63488 00:12:43.256 }, 00:12:43.256 { 00:12:43.256 "name": "BaseBdev3", 00:12:43.256 "uuid": "88c6c0b1-6794-590a-9614-bcf32eebb8fd", 00:12:43.256 "is_configured": true, 00:12:43.256 "data_offset": 2048, 00:12:43.256 "data_size": 63488 00:12:43.256 }, 00:12:43.256 { 00:12:43.256 "name": "BaseBdev4", 00:12:43.256 "uuid": "b39ace9b-6ec7-5bcd-b3a0-dc2467f45f85", 00:12:43.256 "is_configured": true, 00:12:43.256 "data_offset": 2048, 00:12:43.256 "data_size": 63488 00:12:43.256 } 00:12:43.256 ] 00:12:43.256 }' 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.256 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.824 [2024-11-19 10:06:57.913871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.824 [2024-11-19 10:06:57.913915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.824 [2024-11-19 10:06:57.917448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.824 [2024-11-19 10:06:57.917537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.824 [2024-11-19 10:06:57.917714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.824 [2024-11-19 10:06:57.917737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:43.824 { 00:12:43.824 "results": [ 00:12:43.824 { 00:12:43.824 "job": "raid_bdev1", 00:12:43.824 "core_mask": "0x1", 00:12:43.824 "workload": "randrw", 00:12:43.824 "percentage": 50, 00:12:43.824 "status": "finished", 00:12:43.824 "queue_depth": 1, 00:12:43.824 "io_size": 131072, 00:12:43.824 "runtime": 1.384337, 00:12:43.824 "iops": 6392.229637725496, 00:12:43.824 "mibps": 799.028704715687, 00:12:43.824 "io_failed": 0, 00:12:43.824 "io_timeout": 0, 00:12:43.824 "avg_latency_us": 152.05322758606516, 00:12:43.824 "min_latency_us": 40.49454545454545, 00:12:43.824 "max_latency_us": 1861.8181818181818 00:12:43.824 } 00:12:43.824 ], 00:12:43.824 "core_count": 1 00:12:43.824 } 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75082 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75082 ']' 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75082 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75082 00:12:43.824 killing process with pid 75082 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75082' 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75082 00:12:43.824 10:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75082 00:12:43.824 [2024-11-19 10:06:57.948804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.083 [2024-11-19 10:06:58.264264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UIvkq5bRqb 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:45.461 00:12:45.461 real 0m5.096s 00:12:45.461 user 0m6.202s 00:12:45.461 sys 0m0.699s 
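For readers tracing the helper calls above: `verify_raid_bdev_state` (bdev_raid.sh@103-115) fetches the bdev list with `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the resulting fields against the expected values. The helper itself is shell; the following Python sketch is an illustrative translation only, run against the (abbreviated) JSON captured in this log:

```python
import json

# RAID bdev info as dumped by `rpc_cmd bdev_raid_get_bdevs all` in this log,
# abbreviated to the fields the shell helper actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Field-by-field comparison mirroring the shell helper (illustrative)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# The read-error pass expects the array to stay fully online:
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 4)
```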
00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.461 10:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.461 ************************************ 00:12:45.461 END TEST raid_read_error_test 00:12:45.461 ************************************ 00:12:45.461 10:06:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:45.461 10:06:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:45.461 10:06:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.461 10:06:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.461 ************************************ 00:12:45.461 START TEST raid_write_error_test 00:12:45.461 ************************************ 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:45.461 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9HYsJUedoj 00:12:45.462 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75233 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75233 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75233 ']' 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.462 10:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.462 [2024-11-19 10:06:59.650953] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:12:45.462 [2024-11-19 10:06:59.651165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75233 ] 00:12:45.721 [2024-11-19 10:06:59.837922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.980 [2024-11-19 10:07:00.020166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.240 [2024-11-19 10:07:00.255802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.240 [2024-11-19 10:07:00.255915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.500 BaseBdev1_malloc 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.500 true 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.500 [2024-11-19 10:07:00.703182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:46.500 [2024-11-19 10:07:00.703268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.500 [2024-11-19 10:07:00.703298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:46.500 [2024-11-19 10:07:00.703315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.500 [2024-11-19 10:07:00.706552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.500 [2024-11-19 10:07:00.706614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.500 BaseBdev1 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.500 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 BaseBdev2_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:46.761 10:07:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 true 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 [2024-11-19 10:07:00.776925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:46.761 [2024-11-19 10:07:00.777012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.761 [2024-11-19 10:07:00.777044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:46.761 [2024-11-19 10:07:00.777063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.761 [2024-11-19 10:07:00.780166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.761 [2024-11-19 10:07:00.780217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:46.761 BaseBdev2 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:46.761 BaseBdev3_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 true 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 [2024-11-19 10:07:00.850841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:46.761 [2024-11-19 10:07:00.850953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.761 [2024-11-19 10:07:00.850999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:46.761 [2024-11-19 10:07:00.851017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.761 [2024-11-19 10:07:00.854174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.761 [2024-11-19 10:07:00.854237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:46.761 BaseBdev3 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 BaseBdev4_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 true 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 [2024-11-19 10:07:00.920585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:46.761 [2024-11-19 10:07:00.920657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.761 [2024-11-19 10:07:00.920694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:46.761 [2024-11-19 10:07:00.920711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.761 [2024-11-19 10:07:00.923801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.761 [2024-11-19 10:07:00.923885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:46.761 BaseBdev4 
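The `expected_num_base_bdevs` branch (bdev_raid.sh@831-835, visible earlier in the read-error pass) encodes the invariant these tests check: a raid1 write failure fails the base bdev out of the array, while a read failure is served from a mirror and leaves all base bdevs configured. A hedged Python restatement of that shell branch:

```python
def expected_num_base_bdevs(raid_level: str, error_io_type: str, num_base_bdevs: int) -> int:
    """Re-expression of the branch in bdev_raid.sh@831-835 (illustrative).

    For raid1, an injected write error removes the failing base bdev, so one
    fewer bdev is expected afterwards; an injected read error is recovered
    from redundancy and the array stays fully configured.
    """
    if raid_level == "raid1" and error_io_type == "write":
        return num_base_bdevs - 1
    return num_base_bdevs

# Matches the two passes in this log:
assert expected_num_base_bdevs("raid1", "read", 4) == 4   # raid_read_error_test
assert expected_num_base_bdevs("raid1", "write", 4) == 3  # raid_write_error_test
```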
00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 [2024-11-19 10:07:00.932858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.761 [2024-11-19 10:07:00.935657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.761 [2024-11-19 10:07:00.935815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.761 [2024-11-19 10:07:00.935956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.761 [2024-11-19 10:07:00.936307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:46.761 [2024-11-19 10:07:00.936378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.761 [2024-11-19 10:07:00.936722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:46.761 [2024-11-19 10:07:00.937003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:46.761 [2024-11-19 10:07:00.937020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:46.761 [2024-11-19 10:07:00.937356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.021 "name": "raid_bdev1", 00:12:47.021 "uuid": "58cbd1c8-4b38-4ff7-8658-0630624b4bb2", 00:12:47.021 "strip_size_kb": 0, 00:12:47.021 "state": "online", 00:12:47.021 "raid_level": "raid1", 00:12:47.021 "superblock": true, 00:12:47.021 "num_base_bdevs": 4, 00:12:47.021 "num_base_bdevs_discovered": 4, 00:12:47.021 
"num_base_bdevs_operational": 4, 00:12:47.021 "base_bdevs_list": [ 00:12:47.021 { 00:12:47.021 "name": "BaseBdev1", 00:12:47.021 "uuid": "e8eaa454-501f-5c9d-9b29-8686303eca21", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 }, 00:12:47.021 { 00:12:47.021 "name": "BaseBdev2", 00:12:47.021 "uuid": "14b872cc-e63c-5291-bdec-482f93d13e10", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 }, 00:12:47.021 { 00:12:47.021 "name": "BaseBdev3", 00:12:47.021 "uuid": "3be853d2-f23d-5b9f-8356-1d2985d072a6", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 }, 00:12:47.021 { 00:12:47.021 "name": "BaseBdev4", 00:12:47.021 "uuid": "fb1d7a64-5f6e-5c97-972a-5bbf3dbb60b9", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 } 00:12:47.021 ] 00:12:47.021 }' 00:12:47.021 10:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.021 10:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 10:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:47.280 10:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:47.540 [2024-11-19 10:07:01.587234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.478 [2024-11-19 10:07:02.481808] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:48.478 [2024-11-19 10:07:02.481892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.478 [2024-11-19 10:07:02.482312] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.478 "name": "raid_bdev1", 00:12:48.478 "uuid": "58cbd1c8-4b38-4ff7-8658-0630624b4bb2", 00:12:48.478 "strip_size_kb": 0, 00:12:48.478 "state": "online", 00:12:48.478 "raid_level": "raid1", 00:12:48.478 "superblock": true, 00:12:48.478 "num_base_bdevs": 4, 00:12:48.478 "num_base_bdevs_discovered": 3, 00:12:48.478 "num_base_bdevs_operational": 3, 00:12:48.478 "base_bdevs_list": [ 00:12:48.478 { 00:12:48.478 "name": null, 00:12:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.478 "is_configured": false, 00:12:48.478 "data_offset": 0, 00:12:48.478 "data_size": 63488 00:12:48.478 }, 00:12:48.478 { 00:12:48.478 "name": "BaseBdev2", 00:12:48.478 "uuid": "14b872cc-e63c-5291-bdec-482f93d13e10", 00:12:48.478 "is_configured": true, 00:12:48.478 "data_offset": 2048, 00:12:48.478 "data_size": 63488 00:12:48.478 }, 00:12:48.478 { 00:12:48.478 "name": "BaseBdev3", 00:12:48.478 "uuid": "3be853d2-f23d-5b9f-8356-1d2985d072a6", 00:12:48.478 "is_configured": true, 00:12:48.478 "data_offset": 2048, 00:12:48.478 "data_size": 63488 00:12:48.478 }, 00:12:48.478 { 00:12:48.478 "name": "BaseBdev4", 00:12:48.478 "uuid": "fb1d7a64-5f6e-5c97-972a-5bbf3dbb60b9", 00:12:48.478 "is_configured": true, 00:12:48.478 "data_offset": 2048, 00:12:48.478 "data_size": 63488 00:12:48.478 } 00:12:48.478 ] 
00:12:48.478 }' 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.478 10:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.046 [2024-11-19 10:07:03.029325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.046 [2024-11-19 10:07:03.029364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.046 [2024-11-19 10:07:03.032843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.046 [2024-11-19 10:07:03.032913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.046 [2024-11-19 10:07:03.033077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.046 [2024-11-19 10:07:03.033100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:49.046 { 00:12:49.046 "results": [ 00:12:49.046 { 00:12:49.046 "job": "raid_bdev1", 00:12:49.046 "core_mask": "0x1", 00:12:49.046 "workload": "randrw", 00:12:49.046 "percentage": 50, 00:12:49.046 "status": "finished", 00:12:49.046 "queue_depth": 1, 00:12:49.046 "io_size": 131072, 00:12:49.046 "runtime": 1.438929, 00:12:49.046 "iops": 6686.917839587638, 00:12:49.046 "mibps": 835.8647299484547, 00:12:49.046 "io_failed": 0, 00:12:49.046 "io_timeout": 0, 00:12:49.046 "avg_latency_us": 144.77094839477715, 00:12:49.046 "min_latency_us": 38.63272727272727, 00:12:49.046 "max_latency_us": 2055.447272727273 00:12:49.046 } 00:12:49.046 ], 00:12:49.046 "core_count": 1 
00:12:49.046 } 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75233 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75233 ']' 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75233 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75233 00:12:49.046 killing process with pid 75233 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75233' 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75233 00:12:49.046 [2024-11-19 10:07:03.065699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.046 10:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75233 00:12:49.305 [2024-11-19 10:07:03.409420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9HYsJUedoj 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:50.684 00:12:50.684 real 0m5.105s 00:12:50.684 user 0m6.159s 00:12:50.684 sys 0m0.697s 00:12:50.684 ************************************ 00:12:50.684 END TEST raid_write_error_test 00:12:50.684 ************************************ 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.684 10:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 10:07:04 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:50.684 10:07:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:50.684 10:07:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:50.684 10:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:50.684 10:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.684 10:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 ************************************ 00:12:50.684 START TEST raid_rebuild_test 00:12:50.684 ************************************ 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:50.684 
10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
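The xtrace above (`bdev_raid.sh@574`/`@576`) walks an index from 1 to `num_base_bdevs` and echoes a `BaseBdevN` name on each pass before capturing the result into the `base_bdevs` array. Extracted as a standalone sketch (variable names taken from the trace; this reproduces only the array construction, not the rest of the test setup):

```shell
# Standalone version of the traced loop: collect the base bdev names for a
# two-disk raid1 into an array, as raid_rebuild_test's setup does.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"
```

In the actual script the loop body is an `echo` inside a command substitution, which is why each `echo BaseBdev1` / `echo BaseBdev2` shows up as its own xtrace line.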
00:12:50.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75377 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75377 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75377 ']' 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.684 10:07:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 [2024-11-19 10:07:04.812528] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:12:50.684 [2024-11-19 10:07:04.813016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75377 ] 00:12:50.684 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.684 Zero copy mechanism will not be used. 
00:12:50.944 [2024-11-19 10:07:05.002619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.944 [2024-11-19 10:07:05.153009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.203 [2024-11-19 10:07:05.384800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.203 [2024-11-19 10:07:05.384908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 BaseBdev1_malloc 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 [2024-11-19 10:07:05.979854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.772 [2024-11-19 10:07:05.979984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.772 [2024-11-19 10:07:05.980022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:51.772 [2024-11-19 10:07:05.980053] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.772 [2024-11-19 10:07:05.983219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.772 [2024-11-19 10:07:05.983271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.772 BaseBdev1 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.772 10:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 BaseBdev2_malloc 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 [2024-11-19 10:07:06.040493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:52.032 [2024-11-19 10:07:06.040583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.032 [2024-11-19 10:07:06.040613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.032 [2024-11-19 10:07:06.040635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.032 [2024-11-19 10:07:06.043688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.032 [2024-11-19 10:07:06.043914] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:52.032 BaseBdev2 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 spare_malloc 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 spare_delay 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 [2024-11-19 10:07:06.124222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.032 [2024-11-19 10:07:06.124309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.032 [2024-11-19 10:07:06.124355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:52.032 [2024-11-19 10:07:06.124382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.032 [2024-11-19 
10:07:06.127424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.032 [2024-11-19 10:07:06.127632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.032 spare 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 [2024-11-19 10:07:06.136420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.032 [2024-11-19 10:07:06.139119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.032 [2024-11-19 10:07:06.139257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:52.032 [2024-11-19 10:07:06.139280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:52.032 [2024-11-19 10:07:06.139625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:52.032 [2024-11-19 10:07:06.139885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:52.032 [2024-11-19 10:07:06.139907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:52.032 [2024-11-19 10:07:06.140107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.032 10:07:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.032 "name": "raid_bdev1", 00:12:52.032 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:12:52.032 "strip_size_kb": 0, 00:12:52.032 "state": "online", 00:12:52.032 "raid_level": "raid1", 00:12:52.032 "superblock": false, 00:12:52.032 "num_base_bdevs": 2, 00:12:52.032 "num_base_bdevs_discovered": 2, 00:12:52.032 "num_base_bdevs_operational": 2, 00:12:52.032 "base_bdevs_list": [ 00:12:52.032 { 00:12:52.032 "name": "BaseBdev1", 
00:12:52.032 "uuid": "be5109b2-42c4-5dcd-9a08-4bbeea012273", 00:12:52.032 "is_configured": true, 00:12:52.032 "data_offset": 0, 00:12:52.032 "data_size": 65536 00:12:52.032 }, 00:12:52.032 { 00:12:52.032 "name": "BaseBdev2", 00:12:52.032 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:12:52.032 "is_configured": true, 00:12:52.032 "data_offset": 0, 00:12:52.032 "data_size": 65536 00:12:52.032 } 00:12:52.032 ] 00:12:52.032 }' 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.032 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.600 [2024-11-19 10:07:06.680959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:52.600 
10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.600 10:07:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:52.858 [2024-11-19 10:07:07.076747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:52.858 /dev/nbd0 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.118 1+0 records in 00:12:53.118 1+0 records out 00:12:53.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302171 s, 13.6 MB/s 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:53.118 10:07:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:59.683 65536+0 records in 00:12:59.683 65536+0 records out 00:12:59.683 33554432 bytes (34 MB, 32 MiB) copied, 6.68379 s, 5.0 MB/s 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.683 10:07:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.942 [2024-11-19 10:07:14.157709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.942 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.942 [2024-11-19 10:07:14.169841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.201 10:07:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.201 "name": "raid_bdev1", 00:13:00.201 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:00.201 "strip_size_kb": 0, 00:13:00.201 "state": "online", 00:13:00.201 "raid_level": "raid1", 00:13:00.201 "superblock": false, 00:13:00.201 "num_base_bdevs": 2, 00:13:00.201 "num_base_bdevs_discovered": 1, 00:13:00.201 "num_base_bdevs_operational": 1, 00:13:00.201 "base_bdevs_list": [ 00:13:00.201 { 00:13:00.201 "name": null, 00:13:00.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.201 "is_configured": false, 00:13:00.201 "data_offset": 0, 00:13:00.201 "data_size": 65536 00:13:00.201 }, 00:13:00.201 { 00:13:00.201 "name": "BaseBdev2", 00:13:00.201 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:00.201 "is_configured": true, 00:13:00.201 "data_offset": 0, 00:13:00.201 "data_size": 65536 00:13:00.201 } 00:13:00.201 ] 00:13:00.201 }' 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.201 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.460 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.460 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.460 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.460 [2024-11-19 10:07:14.654013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.460 [2024-11-19 10:07:14.672421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:00.460 10:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.460 10:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.460 [2024-11-19 10:07:14.675269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:13:01.839 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.839 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.839 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.840 "name": "raid_bdev1", 00:13:01.840 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:01.840 "strip_size_kb": 0, 00:13:01.840 "state": "online", 00:13:01.840 "raid_level": "raid1", 00:13:01.840 "superblock": false, 00:13:01.840 "num_base_bdevs": 2, 00:13:01.840 "num_base_bdevs_discovered": 2, 00:13:01.840 "num_base_bdevs_operational": 2, 00:13:01.840 "process": { 00:13:01.840 "type": "rebuild", 00:13:01.840 "target": "spare", 00:13:01.840 "progress": { 00:13:01.840 "blocks": 20480, 00:13:01.840 "percent": 31 00:13:01.840 } 00:13:01.840 }, 00:13:01.840 "base_bdevs_list": [ 00:13:01.840 { 00:13:01.840 "name": "spare", 00:13:01.840 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:01.840 "is_configured": true, 00:13:01.840 "data_offset": 0, 00:13:01.840 
"data_size": 65536 00:13:01.840 }, 00:13:01.840 { 00:13:01.840 "name": "BaseBdev2", 00:13:01.840 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:01.840 "is_configured": true, 00:13:01.840 "data_offset": 0, 00:13:01.840 "data_size": 65536 00:13:01.840 } 00:13:01.840 ] 00:13:01.840 }' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.840 [2024-11-19 10:07:15.837751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.840 [2024-11-19 10:07:15.887328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.840 [2024-11-19 10:07:15.887566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.840 [2024-11-19 10:07:15.887596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.840 [2024-11-19 10:07:15.887613] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.840 "name": "raid_bdev1", 00:13:01.840 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:01.840 "strip_size_kb": 0, 00:13:01.840 "state": "online", 00:13:01.840 "raid_level": "raid1", 00:13:01.840 "superblock": false, 00:13:01.840 "num_base_bdevs": 2, 00:13:01.840 "num_base_bdevs_discovered": 1, 00:13:01.840 "num_base_bdevs_operational": 1, 00:13:01.840 "base_bdevs_list": [ 00:13:01.840 { 00:13:01.840 "name": null, 00:13:01.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.840 
"is_configured": false, 00:13:01.840 "data_offset": 0, 00:13:01.840 "data_size": 65536 00:13:01.840 }, 00:13:01.840 { 00:13:01.840 "name": "BaseBdev2", 00:13:01.840 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:01.840 "is_configured": true, 00:13:01.840 "data_offset": 0, 00:13:01.840 "data_size": 65536 00:13:01.840 } 00:13:01.840 ] 00:13:01.840 }' 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.840 10:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.409 "name": "raid_bdev1", 00:13:02.409 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:02.409 "strip_size_kb": 0, 00:13:02.409 "state": "online", 00:13:02.409 "raid_level": "raid1", 00:13:02.409 "superblock": false, 00:13:02.409 "num_base_bdevs": 2, 00:13:02.410 
"num_base_bdevs_discovered": 1, 00:13:02.410 "num_base_bdevs_operational": 1, 00:13:02.410 "base_bdevs_list": [ 00:13:02.410 { 00:13:02.410 "name": null, 00:13:02.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.410 "is_configured": false, 00:13:02.410 "data_offset": 0, 00:13:02.410 "data_size": 65536 00:13:02.410 }, 00:13:02.410 { 00:13:02.410 "name": "BaseBdev2", 00:13:02.410 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:02.410 "is_configured": true, 00:13:02.410 "data_offset": 0, 00:13:02.410 "data_size": 65536 00:13:02.410 } 00:13:02.410 ] 00:13:02.410 }' 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 [2024-11-19 10:07:16.553516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.410 [2024-11-19 10:07:16.571432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 10:07:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:02.410 [2024-11-19 10:07:16.574372] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.348 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.348 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.348 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.348 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.348 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.642 "name": "raid_bdev1", 00:13:03.642 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:03.642 "strip_size_kb": 0, 00:13:03.642 "state": "online", 00:13:03.642 "raid_level": "raid1", 00:13:03.642 "superblock": false, 00:13:03.642 "num_base_bdevs": 2, 00:13:03.642 "num_base_bdevs_discovered": 2, 00:13:03.642 "num_base_bdevs_operational": 2, 00:13:03.642 "process": { 00:13:03.642 "type": "rebuild", 00:13:03.642 "target": "spare", 00:13:03.642 "progress": { 00:13:03.642 "blocks": 20480, 00:13:03.642 "percent": 31 00:13:03.642 } 00:13:03.642 }, 00:13:03.642 "base_bdevs_list": [ 00:13:03.642 { 00:13:03.642 "name": "spare", 00:13:03.642 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:03.642 "is_configured": true, 00:13:03.642 "data_offset": 0, 00:13:03.642 "data_size": 65536 00:13:03.642 }, 00:13:03.642 { 00:13:03.642 "name": "BaseBdev2", 00:13:03.642 "uuid": 
"1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:03.642 "is_configured": true, 00:13:03.642 "data_offset": 0, 00:13:03.642 "data_size": 65536 00:13:03.642 } 00:13:03.642 ] 00:13:03.642 }' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.642 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.642 "name": "raid_bdev1", 00:13:03.642 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:03.642 "strip_size_kb": 0, 00:13:03.642 "state": "online", 00:13:03.642 "raid_level": "raid1", 00:13:03.642 "superblock": false, 00:13:03.642 "num_base_bdevs": 2, 00:13:03.642 "num_base_bdevs_discovered": 2, 00:13:03.642 "num_base_bdevs_operational": 2, 00:13:03.642 "process": { 00:13:03.642 "type": "rebuild", 00:13:03.642 "target": "spare", 00:13:03.642 "progress": { 00:13:03.642 "blocks": 22528, 00:13:03.642 "percent": 34 00:13:03.642 } 00:13:03.642 }, 00:13:03.643 "base_bdevs_list": [ 00:13:03.643 { 00:13:03.643 "name": "spare", 00:13:03.643 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:03.643 "is_configured": true, 00:13:03.643 "data_offset": 0, 00:13:03.643 "data_size": 65536 00:13:03.643 }, 00:13:03.643 { 00:13:03.643 "name": "BaseBdev2", 00:13:03.643 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:03.643 "is_configured": true, 00:13:03.643 "data_offset": 0, 00:13:03.643 "data_size": 65536 00:13:03.643 } 00:13:03.643 ] 00:13:03.643 }' 00:13:03.643 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.643 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.643 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.901 10:07:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.901 10:07:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.838 "name": "raid_bdev1", 00:13:04.838 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:04.838 "strip_size_kb": 0, 00:13:04.838 "state": "online", 00:13:04.838 "raid_level": "raid1", 00:13:04.838 "superblock": false, 00:13:04.838 "num_base_bdevs": 2, 00:13:04.838 "num_base_bdevs_discovered": 2, 00:13:04.838 "num_base_bdevs_operational": 2, 00:13:04.838 "process": { 00:13:04.838 "type": "rebuild", 00:13:04.838 "target": "spare", 00:13:04.838 "progress": { 00:13:04.838 "blocks": 47104, 00:13:04.838 "percent": 71 00:13:04.838 } 00:13:04.838 }, 00:13:04.838 "base_bdevs_list": [ 00:13:04.838 { 00:13:04.838 "name": "spare", 00:13:04.838 "uuid": 
"22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:04.838 "is_configured": true, 00:13:04.838 "data_offset": 0, 00:13:04.838 "data_size": 65536 00:13:04.838 }, 00:13:04.838 { 00:13:04.838 "name": "BaseBdev2", 00:13:04.838 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:04.838 "is_configured": true, 00:13:04.838 "data_offset": 0, 00:13:04.838 "data_size": 65536 00:13:04.838 } 00:13:04.838 ] 00:13:04.838 }' 00:13:04.838 10:07:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.838 10:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.838 10:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.838 10:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.838 10:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.772 [2024-11-19 10:07:19.804727] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:05.772 [2024-11-19 10:07:19.804882] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:05.772 [2024-11-19 10:07:19.804956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.032 10:07:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.032 "name": "raid_bdev1", 00:13:06.032 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:06.032 "strip_size_kb": 0, 00:13:06.032 "state": "online", 00:13:06.032 "raid_level": "raid1", 00:13:06.032 "superblock": false, 00:13:06.032 "num_base_bdevs": 2, 00:13:06.032 "num_base_bdevs_discovered": 2, 00:13:06.032 "num_base_bdevs_operational": 2, 00:13:06.032 "base_bdevs_list": [ 00:13:06.032 { 00:13:06.032 "name": "spare", 00:13:06.032 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:06.032 "is_configured": true, 00:13:06.032 "data_offset": 0, 00:13:06.032 "data_size": 65536 00:13:06.032 }, 00:13:06.032 { 00:13:06.032 "name": "BaseBdev2", 00:13:06.032 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:06.032 "is_configured": true, 00:13:06.032 "data_offset": 0, 00:13:06.032 "data_size": 65536 00:13:06.032 } 00:13:06.032 ] 00:13:06.032 }' 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.032 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.291 "name": "raid_bdev1", 00:13:06.291 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:06.291 "strip_size_kb": 0, 00:13:06.291 "state": "online", 00:13:06.291 "raid_level": "raid1", 00:13:06.291 "superblock": false, 00:13:06.291 "num_base_bdevs": 2, 00:13:06.291 "num_base_bdevs_discovered": 2, 00:13:06.291 "num_base_bdevs_operational": 2, 00:13:06.291 "base_bdevs_list": [ 00:13:06.291 { 00:13:06.291 "name": "spare", 00:13:06.291 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:06.291 "is_configured": true, 00:13:06.291 "data_offset": 0, 00:13:06.291 "data_size": 65536 00:13:06.291 }, 00:13:06.291 { 00:13:06.291 "name": "BaseBdev2", 00:13:06.291 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:06.291 "is_configured": true, 00:13:06.291 "data_offset": 0, 00:13:06.291 "data_size": 65536 
00:13:06.291 } 00:13:06.291 ] 00:13:06.291 }' 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.291 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.292 
10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.292 "name": "raid_bdev1", 00:13:06.292 "uuid": "93c868cd-b2bc-4fdf-8b24-a7a8e20d0493", 00:13:06.292 "strip_size_kb": 0, 00:13:06.292 "state": "online", 00:13:06.292 "raid_level": "raid1", 00:13:06.292 "superblock": false, 00:13:06.292 "num_base_bdevs": 2, 00:13:06.292 "num_base_bdevs_discovered": 2, 00:13:06.292 "num_base_bdevs_operational": 2, 00:13:06.292 "base_bdevs_list": [ 00:13:06.292 { 00:13:06.292 "name": "spare", 00:13:06.292 "uuid": "22d8f59d-af57-5d86-b799-16b8d9aba1f4", 00:13:06.292 "is_configured": true, 00:13:06.292 "data_offset": 0, 00:13:06.292 "data_size": 65536 00:13:06.292 }, 00:13:06.292 { 00:13:06.292 "name": "BaseBdev2", 00:13:06.292 "uuid": "1a2fa8e1-526c-529b-bf40-c3c16ee24b0d", 00:13:06.292 "is_configured": true, 00:13:06.292 "data_offset": 0, 00:13:06.292 "data_size": 65536 00:13:06.292 } 00:13:06.292 ] 00:13:06.292 }' 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.292 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.860 [2024-11-19 10:07:20.882631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.860 [2024-11-19 10:07:20.882675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.860 [2024-11-19 10:07:20.882807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.860 [2024-11-19 10:07:20.882914] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.860 [2024-11-19 10:07:20.882936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.860 10:07:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:07.118 /dev/nbd0 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.118 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.119 1+0 records in 00:13:07.119 1+0 records out 00:13:07.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382846 s, 10.7 MB/s 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:07.119 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:07.746 /dev/nbd1 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.746 1+0 records in 00:13:07.746 1+0 records out 00:13:07.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352302 s, 11.6 MB/s 00:13:07.746 10:07:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.746 10:07:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.020 
10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.020 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75377 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75377 ']' 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75377 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75377 00:13:08.279 killing process with pid 75377 00:13:08.279 Received shutdown signal, test time was about 60.000000 seconds 00:13:08.279 00:13:08.279 Latency(us) 00:13:08.279 [2024-11-19T10:07:22.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.279 [2024-11-19T10:07:22.511Z] =================================================================================================================== 00:13:08.279 [2024-11-19T10:07:22.511Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75377' 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75377 00:13:08.279 [2024-11-19 10:07:22.457304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.279 10:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75377 00:13:08.538 [2024-11-19 10:07:22.754354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.916 ************************************ 00:13:09.916 END TEST raid_rebuild_test 00:13:09.916 ************************************ 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:09.916 00:13:09.916 real 0m19.201s 00:13:09.916 user 0m21.867s 00:13:09.916 sys 0m3.784s 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.916 10:07:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 10:07:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:09.916 10:07:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:09.916 10:07:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.916 10:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 ************************************ 00:13:09.916 START TEST raid_rebuild_test_sb 00:13:09.916 ************************************ 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.916 10:07:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:09.916 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75834 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75834 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75834 ']' 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.917 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.917 10:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.917 [2024-11-19 10:07:24.056598] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:13:09.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:09.917 Zero copy mechanism will not be used. 00:13:09.917 [2024-11-19 10:07:24.056808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75834 ] 00:13:10.175 [2024-11-19 10:07:24.239033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.175 [2024-11-19 10:07:24.390022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.433 [2024-11-19 10:07:24.618205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.433 [2024-11-19 10:07:24.618273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 BaseBdev1_malloc 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 [2024-11-19 10:07:25.187956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.001 [2024-11-19 10:07:25.188108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.001 [2024-11-19 10:07:25.188152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.001 [2024-11-19 10:07:25.188174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.001 [2024-11-19 10:07:25.191533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.001 [2024-11-19 10:07:25.191637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.001 BaseBdev1 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.001 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.260 BaseBdev2_malloc 00:13:11.260 
10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.260 [2024-11-19 10:07:25.249481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:11.260 [2024-11-19 10:07:25.249624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.260 [2024-11-19 10:07:25.249664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.260 [2024-11-19 10:07:25.249690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.260 [2024-11-19 10:07:25.253054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.260 [2024-11-19 10:07:25.253152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.260 BaseBdev2 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.260 spare_malloc 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.260 spare_delay 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.260 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.261 [2024-11-19 10:07:25.335300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:11.261 [2024-11-19 10:07:25.335485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.261 [2024-11-19 10:07:25.335548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:11.261 [2024-11-19 10:07:25.335579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.261 [2024-11-19 10:07:25.340007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.261 [2024-11-19 10:07:25.340140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.261 spare 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.261 [2024-11-19 10:07:25.348698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.261 [2024-11-19 
10:07:25.352369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.261 [2024-11-19 10:07:25.352923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:11.261 [2024-11-19 10:07:25.352972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.261 [2024-11-19 10:07:25.353534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:11.261 [2024-11-19 10:07:25.353930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:11.261 [2024-11-19 10:07:25.353967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:11.261 [2024-11-19 10:07:25.354326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.261 "name": "raid_bdev1", 00:13:11.261 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:11.261 "strip_size_kb": 0, 00:13:11.261 "state": "online", 00:13:11.261 "raid_level": "raid1", 00:13:11.261 "superblock": true, 00:13:11.261 "num_base_bdevs": 2, 00:13:11.261 "num_base_bdevs_discovered": 2, 00:13:11.261 "num_base_bdevs_operational": 2, 00:13:11.261 "base_bdevs_list": [ 00:13:11.261 { 00:13:11.261 "name": "BaseBdev1", 00:13:11.261 "uuid": "08e6bb94-867d-521a-bf0f-7238a9e69f0b", 00:13:11.261 "is_configured": true, 00:13:11.261 "data_offset": 2048, 00:13:11.261 "data_size": 63488 00:13:11.261 }, 00:13:11.261 { 00:13:11.261 "name": "BaseBdev2", 00:13:11.261 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:11.261 "is_configured": true, 00:13:11.261 "data_offset": 2048, 00:13:11.261 "data_size": 63488 00:13:11.261 } 00:13:11.261 ] 00:13:11.261 }' 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.261 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.832 [2024-11-19 10:07:25.909375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.832 10:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.832 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:12.095 [2024-11-19 10:07:26.289222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:12.095 /dev/nbd0 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.354 1+0 records in 00:13:12.354 1+0 records out 00:13:12.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458534 s, 8.9 MB/s 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:12.354 10:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:18.914 63488+0 records in 00:13:18.914 63488+0 records out 00:13:18.914 32505856 bytes (33 MB, 31 MiB) copied, 6.5994 s, 4.9 MB/s 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.914 10:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:19.173 [2024-11-19 10:07:33.307739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.173 [2024-11-19 10:07:33.331969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.173 "name": "raid_bdev1", 00:13:19.173 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:19.173 "strip_size_kb": 0, 00:13:19.173 "state": "online", 00:13:19.173 "raid_level": "raid1", 00:13:19.173 "superblock": true, 00:13:19.173 "num_base_bdevs": 2, 00:13:19.173 "num_base_bdevs_discovered": 1, 00:13:19.173 "num_base_bdevs_operational": 1, 00:13:19.173 "base_bdevs_list": [ 00:13:19.173 { 00:13:19.173 "name": null, 00:13:19.173 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:19.173 "is_configured": false, 00:13:19.173 "data_offset": 0, 00:13:19.173 "data_size": 63488 00:13:19.173 }, 00:13:19.173 { 00:13:19.173 "name": "BaseBdev2", 00:13:19.173 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:19.173 "is_configured": true, 00:13:19.173 "data_offset": 2048, 00:13:19.173 "data_size": 63488 00:13:19.173 } 00:13:19.173 ] 00:13:19.173 }' 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.173 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.741 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.741 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.741 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.741 [2024-11-19 10:07:33.880132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.741 [2024-11-19 10:07:33.898930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:19.741 10:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.741 10:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:19.741 [2024-11-19 10:07:33.902087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.680 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.680 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.680 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.680 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.680 
10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.939 "name": "raid_bdev1", 00:13:20.939 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:20.939 "strip_size_kb": 0, 00:13:20.939 "state": "online", 00:13:20.939 "raid_level": "raid1", 00:13:20.939 "superblock": true, 00:13:20.939 "num_base_bdevs": 2, 00:13:20.939 "num_base_bdevs_discovered": 2, 00:13:20.939 "num_base_bdevs_operational": 2, 00:13:20.939 "process": { 00:13:20.939 "type": "rebuild", 00:13:20.939 "target": "spare", 00:13:20.939 "progress": { 00:13:20.939 "blocks": 20480, 00:13:20.939 "percent": 32 00:13:20.939 } 00:13:20.939 }, 00:13:20.939 "base_bdevs_list": [ 00:13:20.939 { 00:13:20.939 "name": "spare", 00:13:20.939 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:20.939 "is_configured": true, 00:13:20.939 "data_offset": 2048, 00:13:20.939 "data_size": 63488 00:13:20.939 }, 00:13:20.939 { 00:13:20.939 "name": "BaseBdev2", 00:13:20.939 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:20.939 "is_configured": true, 00:13:20.939 "data_offset": 2048, 00:13:20.939 "data_size": 63488 00:13:20.939 } 00:13:20.939 ] 00:13:20.939 }' 00:13:20.939 10:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.939 10:07:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.940 [2024-11-19 10:07:35.072440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.940 [2024-11-19 10:07:35.114471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.940 [2024-11-19 10:07:35.114968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.940 [2024-11-19 10:07:35.115107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.940 [2024-11-19 10:07:35.115167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.940 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.199 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.199 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.199 "name": "raid_bdev1", 00:13:21.199 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:21.199 "strip_size_kb": 0, 00:13:21.199 "state": "online", 00:13:21.199 "raid_level": "raid1", 00:13:21.199 "superblock": true, 00:13:21.199 "num_base_bdevs": 2, 00:13:21.199 "num_base_bdevs_discovered": 1, 00:13:21.199 "num_base_bdevs_operational": 1, 00:13:21.199 "base_bdevs_list": [ 00:13:21.199 { 00:13:21.199 "name": null, 00:13:21.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.199 "is_configured": false, 00:13:21.199 "data_offset": 0, 00:13:21.199 "data_size": 63488 00:13:21.199 }, 00:13:21.199 { 00:13:21.199 "name": "BaseBdev2", 00:13:21.199 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:21.199 "is_configured": true, 00:13:21.199 "data_offset": 2048, 00:13:21.199 "data_size": 63488 00:13:21.199 } 00:13:21.199 ] 00:13:21.199 }' 00:13:21.199 10:07:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.199 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.767 "name": "raid_bdev1", 00:13:21.767 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:21.767 "strip_size_kb": 0, 00:13:21.767 "state": "online", 00:13:21.767 "raid_level": "raid1", 00:13:21.767 "superblock": true, 00:13:21.767 "num_base_bdevs": 2, 00:13:21.767 "num_base_bdevs_discovered": 1, 00:13:21.767 "num_base_bdevs_operational": 1, 00:13:21.767 "base_bdevs_list": [ 00:13:21.767 { 00:13:21.767 "name": null, 00:13:21.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.767 "is_configured": false, 00:13:21.767 "data_offset": 0, 00:13:21.767 "data_size": 63488 00:13:21.767 }, 00:13:21.767 
{ 00:13:21.767 "name": "BaseBdev2", 00:13:21.767 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:21.767 "is_configured": true, 00:13:21.767 "data_offset": 2048, 00:13:21.767 "data_size": 63488 00:13:21.767 } 00:13:21.767 ] 00:13:21.767 }' 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.767 [2024-11-19 10:07:35.874983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.767 [2024-11-19 10:07:35.892188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.767 10:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:21.767 [2024-11-19 10:07:35.895056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.703 10:07:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.703 10:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.962 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.962 "name": "raid_bdev1", 00:13:22.962 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:22.962 "strip_size_kb": 0, 00:13:22.962 "state": "online", 00:13:22.962 "raid_level": "raid1", 00:13:22.962 "superblock": true, 00:13:22.962 "num_base_bdevs": 2, 00:13:22.962 "num_base_bdevs_discovered": 2, 00:13:22.962 "num_base_bdevs_operational": 2, 00:13:22.962 "process": { 00:13:22.962 "type": "rebuild", 00:13:22.962 "target": "spare", 00:13:22.962 "progress": { 00:13:22.962 "blocks": 20480, 00:13:22.962 "percent": 32 00:13:22.962 } 00:13:22.962 }, 00:13:22.962 "base_bdevs_list": [ 00:13:22.962 { 00:13:22.962 "name": "spare", 00:13:22.962 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:22.962 "is_configured": true, 00:13:22.962 "data_offset": 2048, 00:13:22.962 "data_size": 63488 00:13:22.962 }, 00:13:22.962 { 00:13:22.962 "name": "BaseBdev2", 00:13:22.962 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:22.962 "is_configured": true, 00:13:22.962 "data_offset": 2048, 00:13:22.962 "data_size": 63488 00:13:22.962 } 00:13:22.962 ] 00:13:22.962 }' 00:13:22.962 10:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:22.962 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.962 "name": "raid_bdev1", 00:13:22.962 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:22.962 "strip_size_kb": 0, 00:13:22.962 "state": "online", 00:13:22.962 "raid_level": "raid1", 00:13:22.962 "superblock": true, 00:13:22.962 "num_base_bdevs": 2, 00:13:22.962 "num_base_bdevs_discovered": 2, 00:13:22.962 "num_base_bdevs_operational": 2, 00:13:22.962 "process": { 00:13:22.962 "type": "rebuild", 00:13:22.962 "target": "spare", 00:13:22.962 "progress": { 00:13:22.962 "blocks": 22528, 00:13:22.962 "percent": 35 00:13:22.962 } 00:13:22.962 }, 00:13:22.962 "base_bdevs_list": [ 00:13:22.962 { 00:13:22.962 "name": "spare", 00:13:22.962 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:22.962 "is_configured": true, 00:13:22.962 "data_offset": 2048, 00:13:22.962 "data_size": 63488 00:13:22.962 }, 00:13:22.962 { 00:13:22.962 "name": "BaseBdev2", 00:13:22.962 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:22.962 "is_configured": true, 00:13:22.962 "data_offset": 2048, 00:13:22.962 "data_size": 63488 00:13:22.962 } 00:13:22.962 ] 00:13:22.962 }' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.962 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.222 10:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.222 10:07:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.159 "name": "raid_bdev1", 00:13:24.159 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:24.159 "strip_size_kb": 0, 00:13:24.159 "state": "online", 00:13:24.159 "raid_level": "raid1", 00:13:24.159 "superblock": true, 00:13:24.159 "num_base_bdevs": 2, 00:13:24.159 "num_base_bdevs_discovered": 2, 00:13:24.159 "num_base_bdevs_operational": 2, 00:13:24.159 "process": { 00:13:24.159 "type": "rebuild", 00:13:24.159 "target": "spare", 00:13:24.159 "progress": { 00:13:24.159 "blocks": 47104, 00:13:24.159 "percent": 74 00:13:24.159 } 00:13:24.159 }, 00:13:24.159 "base_bdevs_list": [ 00:13:24.159 { 
00:13:24.159 "name": "spare", 00:13:24.159 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:24.159 "is_configured": true, 00:13:24.159 "data_offset": 2048, 00:13:24.159 "data_size": 63488 00:13:24.159 }, 00:13:24.159 { 00:13:24.159 "name": "BaseBdev2", 00:13:24.159 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:24.159 "is_configured": true, 00:13:24.159 "data_offset": 2048, 00:13:24.159 "data_size": 63488 00:13:24.159 } 00:13:24.159 ] 00:13:24.159 }' 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.159 10:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.096 [2024-11-19 10:07:39.025694] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:25.096 [2024-11-19 10:07:39.026204] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:25.096 [2024-11-19 10:07:39.026425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.355 10:07:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.355 "name": "raid_bdev1", 00:13:25.355 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:25.355 "strip_size_kb": 0, 00:13:25.355 "state": "online", 00:13:25.355 "raid_level": "raid1", 00:13:25.355 "superblock": true, 00:13:25.355 "num_base_bdevs": 2, 00:13:25.355 "num_base_bdevs_discovered": 2, 00:13:25.355 "num_base_bdevs_operational": 2, 00:13:25.355 "base_bdevs_list": [ 00:13:25.355 { 00:13:25.355 "name": "spare", 00:13:25.355 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:25.355 "is_configured": true, 00:13:25.355 "data_offset": 2048, 00:13:25.355 "data_size": 63488 00:13:25.355 }, 00:13:25.355 { 00:13:25.355 "name": "BaseBdev2", 00:13:25.355 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:25.355 "is_configured": true, 00:13:25.355 "data_offset": 2048, 00:13:25.355 "data_size": 63488 00:13:25.355 } 00:13:25.355 ] 00:13:25.355 }' 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.355 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.613 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.613 "name": "raid_bdev1", 00:13:25.613 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:25.613 "strip_size_kb": 0, 00:13:25.613 "state": "online", 00:13:25.613 "raid_level": "raid1", 00:13:25.613 "superblock": true, 00:13:25.614 "num_base_bdevs": 2, 00:13:25.614 "num_base_bdevs_discovered": 2, 00:13:25.614 "num_base_bdevs_operational": 2, 00:13:25.614 "base_bdevs_list": [ 00:13:25.614 { 00:13:25.614 "name": "spare", 00:13:25.614 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:25.614 "is_configured": true, 00:13:25.614 "data_offset": 2048, 00:13:25.614 "data_size": 63488 00:13:25.614 }, 00:13:25.614 { 00:13:25.614 "name": 
"BaseBdev2", 00:13:25.614 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:25.614 "is_configured": true, 00:13:25.614 "data_offset": 2048, 00:13:25.614 "data_size": 63488 00:13:25.614 } 00:13:25.614 ] 00:13:25.614 }' 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.614 "name": "raid_bdev1", 00:13:25.614 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:25.614 "strip_size_kb": 0, 00:13:25.614 "state": "online", 00:13:25.614 "raid_level": "raid1", 00:13:25.614 "superblock": true, 00:13:25.614 "num_base_bdevs": 2, 00:13:25.614 "num_base_bdevs_discovered": 2, 00:13:25.614 "num_base_bdevs_operational": 2, 00:13:25.614 "base_bdevs_list": [ 00:13:25.614 { 00:13:25.614 "name": "spare", 00:13:25.614 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:25.614 "is_configured": true, 00:13:25.614 "data_offset": 2048, 00:13:25.614 "data_size": 63488 00:13:25.614 }, 00:13:25.614 { 00:13:25.614 "name": "BaseBdev2", 00:13:25.614 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:25.614 "is_configured": true, 00:13:25.614 "data_offset": 2048, 00:13:25.614 "data_size": 63488 00:13:25.614 } 00:13:25.614 ] 00:13:25.614 }' 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.614 10:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.269 [2024-11-19 10:07:40.262182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.269 [2024-11-19 10:07:40.262253] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.269 [2024-11-19 10:07:40.262387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.269 [2024-11-19 10:07:40.262500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.269 [2024-11-19 10:07:40.262519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.269 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:26.527 /dev/nbd0 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.527 1+0 records in 00:13:26.527 1+0 records out 00:13:26.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000458355 s, 8.9 MB/s 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.527 10:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:27.095 /dev/nbd1 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.095 10:07:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.095 1+0 records in 00:13:27.095 1+0 records out 00:13:27.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654419 s, 6.3 MB/s 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.095 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:27.353 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:27.353 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.353 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.354 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.354 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:27.354 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.354 
10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.613 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.872 [2024-11-19 10:07:41.974102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.872 [2024-11-19 10:07:41.974205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.872 [2024-11-19 10:07:41.974245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:27.872 [2024-11-19 10:07:41.974261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.872 [2024-11-19 10:07:41.977624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.872 [2024-11-19 10:07:41.977712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.872 [2024-11-19 10:07:41.977893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:27.872 [2024-11-19 10:07:41.977972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.872 [2024-11-19 10:07:41.978195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:27.872 spare 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.872 10:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.872 [2024-11-19 10:07:42.078435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:27.872 [2024-11-19 10:07:42.078538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:27.872 [2024-11-19 10:07:42.079079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:27.872 [2024-11-19 10:07:42.079411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:27.872 [2024-11-19 10:07:42.079433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:27.872 [2024-11-19 10:07:42.079721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.872 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.873 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.131 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.131 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.131 "name": "raid_bdev1", 00:13:28.131 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:28.131 "strip_size_kb": 0, 00:13:28.131 "state": "online", 00:13:28.131 "raid_level": "raid1", 00:13:28.131 "superblock": true, 00:13:28.131 "num_base_bdevs": 2, 00:13:28.131 "num_base_bdevs_discovered": 2, 00:13:28.131 "num_base_bdevs_operational": 2, 00:13:28.131 "base_bdevs_list": [ 00:13:28.131 { 00:13:28.131 "name": "spare", 00:13:28.131 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:28.131 "is_configured": true, 00:13:28.131 "data_offset": 2048, 00:13:28.131 "data_size": 63488 00:13:28.131 }, 00:13:28.131 { 00:13:28.131 "name": "BaseBdev2", 00:13:28.131 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:28.131 "is_configured": true, 00:13:28.131 "data_offset": 2048, 00:13:28.131 "data_size": 63488 00:13:28.131 } 00:13:28.131 ] 00:13:28.131 }' 00:13:28.131 10:07:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.131 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.389 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.648 "name": "raid_bdev1", 00:13:28.648 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:28.648 "strip_size_kb": 0, 00:13:28.648 "state": "online", 00:13:28.648 "raid_level": "raid1", 00:13:28.648 "superblock": true, 00:13:28.648 "num_base_bdevs": 2, 00:13:28.648 "num_base_bdevs_discovered": 2, 00:13:28.648 "num_base_bdevs_operational": 2, 00:13:28.648 "base_bdevs_list": [ 00:13:28.648 { 00:13:28.648 "name": "spare", 00:13:28.648 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:28.648 "is_configured": true, 00:13:28.648 "data_offset": 2048, 00:13:28.648 "data_size": 63488 00:13:28.648 }, 
00:13:28.648 { 00:13:28.648 "name": "BaseBdev2", 00:13:28.648 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:28.648 "is_configured": true, 00:13:28.648 "data_offset": 2048, 00:13:28.648 "data_size": 63488 00:13:28.648 } 00:13:28.648 ] 00:13:28.648 }' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.648 [2024-11-19 10:07:42.790660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.648 "name": "raid_bdev1", 00:13:28.648 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:28.648 "strip_size_kb": 0, 00:13:28.648 "state": "online", 00:13:28.648 "raid_level": "raid1", 00:13:28.648 "superblock": true, 00:13:28.648 "num_base_bdevs": 2, 00:13:28.648 "num_base_bdevs_discovered": 1, 00:13:28.648 "num_base_bdevs_operational": 
1, 00:13:28.648 "base_bdevs_list": [ 00:13:28.648 { 00:13:28.648 "name": null, 00:13:28.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.648 "is_configured": false, 00:13:28.648 "data_offset": 0, 00:13:28.648 "data_size": 63488 00:13:28.648 }, 00:13:28.648 { 00:13:28.648 "name": "BaseBdev2", 00:13:28.648 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:28.648 "is_configured": true, 00:13:28.648 "data_offset": 2048, 00:13:28.648 "data_size": 63488 00:13:28.648 } 00:13:28.648 ] 00:13:28.648 }' 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.648 10:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.215 10:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.215 10:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.215 10:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.215 [2024-11-19 10:07:43.334916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.215 [2024-11-19 10:07:43.335262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:29.215 [2024-11-19 10:07:43.335291] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
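The trace around this point is repeatedly exercising the harness's `verify_raid_bdev_state` / `verify_raid_bdev_process` helpers: each check fetches the raid bdev list over the RPC socket (`rpc_cmd bdev_raid_get_bdevs all`), selects `raid_bdev1` with jq, and compares fields such as `state`, `process.type`, and `process.target` against expected values. The following is a minimal self-contained sketch of that check pattern; the stubbed `rpc_cmd` returning canned JSON and the grep-based field match are assumptions made so the sketch runs standalone (the real harness talks to a live SPDK target and uses jq expressions like `.process.type // "none"`):

```shell
#!/usr/bin/env bash
# Sketch of the state-verification pattern seen in the trace.
# rpc_cmd is a STUB: it returns a trimmed version of the
# bdev_raid_get_bdevs payload captured in the log above.
rpc_cmd() {
    cat <<'EOF'
[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}]
EOF
}

# Check that the named field carries the expected value.
# (Illustrative; the real scripts extract fields with jq, not grep.)
verify_state() {
    local expected=$1
    rpc_cmd bdev_raid_get_bdevs all | grep -q "\"state\": \"$expected\""
}

verify_state online && echo "raid_bdev1 is online"
```

The real helpers additionally count `num_base_bdevs_discovered` against the expected operational count, which is why the log re-runs the same jq query after each `bdev_raid_remove_base_bdev`/`bdev_raid_add_base_bdev` step.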
00:13:29.215 [2024-11-19 10:07:43.335351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.215 [2024-11-19 10:07:43.352314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:29.215 10:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.215 10:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:29.215 [2024-11-19 10:07:43.355172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.149 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.444 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.444 "name": "raid_bdev1", 00:13:30.444 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:30.444 "strip_size_kb": 0, 00:13:30.444 "state": "online", 00:13:30.444 "raid_level": "raid1", 
00:13:30.444 "superblock": true, 00:13:30.444 "num_base_bdevs": 2, 00:13:30.444 "num_base_bdevs_discovered": 2, 00:13:30.444 "num_base_bdevs_operational": 2, 00:13:30.444 "process": { 00:13:30.444 "type": "rebuild", 00:13:30.444 "target": "spare", 00:13:30.444 "progress": { 00:13:30.444 "blocks": 20480, 00:13:30.444 "percent": 32 00:13:30.444 } 00:13:30.444 }, 00:13:30.444 "base_bdevs_list": [ 00:13:30.444 { 00:13:30.444 "name": "spare", 00:13:30.444 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.445 "data_size": 63488 00:13:30.445 }, 00:13:30.445 { 00:13:30.445 "name": "BaseBdev2", 00:13:30.445 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:30.445 "is_configured": true, 00:13:30.445 "data_offset": 2048, 00:13:30.445 "data_size": 63488 00:13:30.445 } 00:13:30.445 ] 00:13:30.445 }' 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.445 [2024-11-19 10:07:44.533283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.445 [2024-11-19 10:07:44.566972] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:30.445 [2024-11-19 10:07:44.567126] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:30.445 [2024-11-19 10:07:44.567154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.445 [2024-11-19 10:07:44.567170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.445 "name": "raid_bdev1", 00:13:30.445 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:30.445 "strip_size_kb": 0, 00:13:30.445 "state": "online", 00:13:30.445 "raid_level": "raid1", 00:13:30.445 "superblock": true, 00:13:30.445 "num_base_bdevs": 2, 00:13:30.445 "num_base_bdevs_discovered": 1, 00:13:30.445 "num_base_bdevs_operational": 1, 00:13:30.445 "base_bdevs_list": [ 00:13:30.445 { 00:13:30.445 "name": null, 00:13:30.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.445 "is_configured": false, 00:13:30.445 "data_offset": 0, 00:13:30.445 "data_size": 63488 00:13:30.445 }, 00:13:30.445 { 00:13:30.445 "name": "BaseBdev2", 00:13:30.445 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:30.445 "is_configured": true, 00:13:30.445 "data_offset": 2048, 00:13:30.445 "data_size": 63488 00:13:30.445 } 00:13:30.445 ] 00:13:30.445 }' 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.445 10:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.013 10:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.013 10:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.013 10:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.013 [2024-11-19 10:07:45.134523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.013 [2024-11-19 10:07:45.134659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.013 [2024-11-19 10:07:45.134698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:31.013 [2024-11-19 10:07:45.134718] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.013 [2024-11-19 10:07:45.135448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.013 [2024-11-19 10:07:45.135498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.013 [2024-11-19 10:07:45.135651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:31.013 [2024-11-19 10:07:45.135679] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.013 [2024-11-19 10:07:45.135695] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:31.013 [2024-11-19 10:07:45.135737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.013 [2024-11-19 10:07:45.153039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:31.013 spare 00:13:31.013 10:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.013 10:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:31.013 [2024-11-19 10:07:45.155902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.946 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.204 "name": "raid_bdev1", 00:13:32.204 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:32.204 "strip_size_kb": 0, 00:13:32.204 "state": "online", 00:13:32.204 "raid_level": "raid1", 00:13:32.204 "superblock": true, 00:13:32.204 "num_base_bdevs": 2, 00:13:32.204 "num_base_bdevs_discovered": 2, 00:13:32.204 "num_base_bdevs_operational": 2, 00:13:32.204 "process": { 00:13:32.204 "type": "rebuild", 00:13:32.204 "target": "spare", 00:13:32.204 "progress": { 00:13:32.204 "blocks": 20480, 00:13:32.204 "percent": 32 00:13:32.204 } 00:13:32.204 }, 00:13:32.204 "base_bdevs_list": [ 00:13:32.204 { 00:13:32.204 "name": "spare", 00:13:32.204 "uuid": "d0e44bca-de02-5810-b507-c286ce9ce7cf", 00:13:32.204 "is_configured": true, 00:13:32.204 "data_offset": 2048, 00:13:32.204 "data_size": 63488 00:13:32.204 }, 00:13:32.204 { 00:13:32.204 "name": "BaseBdev2", 00:13:32.204 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:32.204 "is_configured": true, 00:13:32.204 "data_offset": 2048, 00:13:32.204 "data_size": 63488 00:13:32.204 } 00:13:32.204 ] 00:13:32.204 }' 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.204 
10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 [2024-11-19 10:07:46.362016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.204 [2024-11-19 10:07:46.367639] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.204 [2024-11-19 10:07:46.367767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.204 [2024-11-19 10:07:46.367813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.204 [2024-11-19 10:07:46.367827] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.462 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.462 "name": "raid_bdev1", 00:13:32.462 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:32.462 "strip_size_kb": 0, 00:13:32.462 "state": "online", 00:13:32.462 "raid_level": "raid1", 00:13:32.462 "superblock": true, 00:13:32.462 "num_base_bdevs": 2, 00:13:32.462 "num_base_bdevs_discovered": 1, 00:13:32.462 "num_base_bdevs_operational": 1, 00:13:32.462 "base_bdevs_list": [ 00:13:32.462 { 00:13:32.462 "name": null, 00:13:32.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.462 "is_configured": false, 00:13:32.462 "data_offset": 0, 00:13:32.462 "data_size": 63488 00:13:32.462 }, 00:13:32.462 { 00:13:32.462 "name": "BaseBdev2", 00:13:32.462 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:32.462 "is_configured": true, 00:13:32.462 "data_offset": 2048, 00:13:32.462 "data_size": 63488 00:13:32.462 } 00:13:32.462 ] 00:13:32.462 }' 00:13:32.462 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.462 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.721 10:07:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.721 10:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.981 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.981 "name": "raid_bdev1", 00:13:32.981 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:32.981 "strip_size_kb": 0, 00:13:32.981 "state": "online", 00:13:32.981 "raid_level": "raid1", 00:13:32.981 "superblock": true, 00:13:32.981 "num_base_bdevs": 2, 00:13:32.981 "num_base_bdevs_discovered": 1, 00:13:32.981 "num_base_bdevs_operational": 1, 00:13:32.981 "base_bdevs_list": [ 00:13:32.981 { 00:13:32.981 "name": null, 00:13:32.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.981 "is_configured": false, 00:13:32.981 "data_offset": 0, 00:13:32.981 "data_size": 63488 00:13:32.981 }, 00:13:32.981 { 00:13:32.981 "name": "BaseBdev2", 00:13:32.981 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:32.981 "is_configured": true, 00:13:32.981 "data_offset": 2048, 00:13:32.981 "data_size": 
63488 00:13:32.981 } 00:13:32.981 ] 00:13:32.981 }' 00:13:32.981 10:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.981 [2024-11-19 10:07:47.103180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.981 [2024-11-19 10:07:47.103303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.981 [2024-11-19 10:07:47.103344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:32.981 [2024-11-19 10:07:47.103374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.981 [2024-11-19 10:07:47.104084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.981 [2024-11-19 10:07:47.104113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:32.981 [2024-11-19 10:07:47.104256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:32.981 [2024-11-19 10:07:47.104285] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.981 [2024-11-19 10:07:47.104301] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:32.981 [2024-11-19 10:07:47.104318] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:32.981 BaseBdev1 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.981 10:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.917 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.176 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.176 "name": "raid_bdev1", 00:13:34.176 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:34.176 "strip_size_kb": 0, 00:13:34.176 "state": "online", 00:13:34.176 "raid_level": "raid1", 00:13:34.176 "superblock": true, 00:13:34.176 "num_base_bdevs": 2, 00:13:34.176 "num_base_bdevs_discovered": 1, 00:13:34.176 "num_base_bdevs_operational": 1, 00:13:34.176 "base_bdevs_list": [ 00:13:34.176 { 00:13:34.176 "name": null, 00:13:34.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.176 "is_configured": false, 00:13:34.176 "data_offset": 0, 00:13:34.176 "data_size": 63488 00:13:34.176 }, 00:13:34.176 { 00:13:34.176 "name": "BaseBdev2", 00:13:34.176 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:34.176 "is_configured": true, 00:13:34.176 "data_offset": 2048, 00:13:34.176 "data_size": 63488 00:13:34.176 } 00:13:34.176 ] 00:13:34.176 }' 00:13:34.176 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.176 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.435 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.693 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.693 "name": "raid_bdev1", 00:13:34.693 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:34.693 "strip_size_kb": 0, 00:13:34.693 "state": "online", 00:13:34.693 "raid_level": "raid1", 00:13:34.693 "superblock": true, 00:13:34.694 "num_base_bdevs": 2, 00:13:34.694 "num_base_bdevs_discovered": 1, 00:13:34.694 "num_base_bdevs_operational": 1, 00:13:34.694 "base_bdevs_list": [ 00:13:34.694 { 00:13:34.694 "name": null, 00:13:34.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.694 "is_configured": false, 00:13:34.694 "data_offset": 0, 00:13:34.694 "data_size": 63488 00:13:34.694 }, 00:13:34.694 { 00:13:34.694 "name": "BaseBdev2", 00:13:34.694 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:34.694 "is_configured": true, 00:13:34.694 "data_offset": 2048, 00:13:34.694 "data_size": 63488 00:13:34.694 } 00:13:34.694 ] 00:13:34.694 }' 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.694 10:07:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.694 [2024-11-19 10:07:48.807713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.694 [2024-11-19 10:07:48.808033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.694 [2024-11-19 10:07:48.808060] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.694 request: 00:13:34.694 { 00:13:34.694 "base_bdev": "BaseBdev1", 00:13:34.694 "raid_bdev": "raid_bdev1", 00:13:34.694 "method": 
"bdev_raid_add_base_bdev", 00:13:34.694 "req_id": 1 00:13:34.694 } 00:13:34.694 Got JSON-RPC error response 00:13:34.694 response: 00:13:34.694 { 00:13:34.694 "code": -22, 00:13:34.694 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:34.694 } 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.694 10:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.642 10:07:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.642 10:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.901 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.901 "name": "raid_bdev1", 00:13:35.901 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:35.901 "strip_size_kb": 0, 00:13:35.901 "state": "online", 00:13:35.901 "raid_level": "raid1", 00:13:35.901 "superblock": true, 00:13:35.901 "num_base_bdevs": 2, 00:13:35.901 "num_base_bdevs_discovered": 1, 00:13:35.901 "num_base_bdevs_operational": 1, 00:13:35.901 "base_bdevs_list": [ 00:13:35.901 { 00:13:35.901 "name": null, 00:13:35.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.901 "is_configured": false, 00:13:35.901 "data_offset": 0, 00:13:35.901 "data_size": 63488 00:13:35.901 }, 00:13:35.901 { 00:13:35.901 "name": "BaseBdev2", 00:13:35.901 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:35.901 "is_configured": true, 00:13:35.901 "data_offset": 2048, 00:13:35.901 "data_size": 63488 00:13:35.901 } 00:13:35.901 ] 00:13:35.901 }' 00:13:35.901 10:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.901 10:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.159 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.418 "name": "raid_bdev1", 00:13:36.418 "uuid": "6ef07c4a-df0b-4b3c-b86f-f4ac0b325413", 00:13:36.418 "strip_size_kb": 0, 00:13:36.418 "state": "online", 00:13:36.418 "raid_level": "raid1", 00:13:36.418 "superblock": true, 00:13:36.418 "num_base_bdevs": 2, 00:13:36.418 "num_base_bdevs_discovered": 1, 00:13:36.418 "num_base_bdevs_operational": 1, 00:13:36.418 "base_bdevs_list": [ 00:13:36.418 { 00:13:36.418 "name": null, 00:13:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.418 "is_configured": false, 00:13:36.418 "data_offset": 0, 00:13:36.418 "data_size": 63488 00:13:36.418 }, 00:13:36.418 { 00:13:36.418 "name": "BaseBdev2", 00:13:36.418 "uuid": "f0527d52-49c5-59b6-94bb-8721be563545", 00:13:36.418 "is_configured": true, 00:13:36.418 "data_offset": 2048, 00:13:36.418 "data_size": 63488 00:13:36.418 } 00:13:36.418 ] 00:13:36.418 }' 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75834 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75834 ']' 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75834 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.418 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75834 00:13:36.418 killing process with pid 75834 00:13:36.418 Received shutdown signal, test time was about 60.000000 seconds 00:13:36.418 00:13:36.418 Latency(us) 00:13:36.418 [2024-11-19T10:07:50.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.418 [2024-11-19T10:07:50.650Z] =================================================================================================================== 00:13:36.418 [2024-11-19T10:07:50.651Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.419 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.419 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.419 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75834' 00:13:36.419 10:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75834 00:13:36.419 [2024-11-19 10:07:50.552327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.419 10:07:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75834 00:13:36.419 [2024-11-19 10:07:50.552526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.419 [2024-11-19 10:07:50.552609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.419 [2024-11-19 10:07:50.552637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:36.677 [2024-11-19 10:07:50.849114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.056 10:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:38.056 00:13:38.056 real 0m28.042s 00:13:38.056 user 0m34.265s 00:13:38.056 sys 0m4.637s 00:13:38.056 10:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.056 10:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.056 ************************************ 00:13:38.056 END TEST raid_rebuild_test_sb 00:13:38.056 ************************************ 00:13:38.056 10:07:52 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:38.056 10:07:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:38.056 10:07:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.056 10:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.056 ************************************ 00:13:38.056 START TEST raid_rebuild_test_io 00:13:38.056 ************************************ 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:38.056 
10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76604 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76604 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76604 ']' 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.056 10:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.056 [2024-11-19 10:07:52.204474] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:13:38.056 [2024-11-19 10:07:52.204676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76604 ] 00:13:38.056 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:38.056 Zero copy mechanism will not be used. 
00:13:38.314 [2024-11-19 10:07:52.392872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.573 [2024-11-19 10:07:52.575438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.830 [2024-11-19 10:07:52.818535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.830 [2024-11-19 10:07:52.818665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.088 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.089 BaseBdev1_malloc 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.089 [2024-11-19 10:07:53.283333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:39.089 [2024-11-19 10:07:53.283482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.089 [2024-11-19 10:07:53.283526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:39.089 [2024-11-19 
10:07:53.283547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.089 [2024-11-19 10:07:53.286925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.089 [2024-11-19 10:07:53.287027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:39.089 BaseBdev1 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.089 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.347 BaseBdev2_malloc 00:13:39.347 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.347 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:39.347 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.347 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.347 [2024-11-19 10:07:53.345102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:39.347 [2024-11-19 10:07:53.345262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.347 [2024-11-19 10:07:53.345301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:39.348 [2024-11-19 10:07:53.345324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.348 [2024-11-19 10:07:53.348671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:39.348 [2024-11-19 10:07:53.348768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:39.348 BaseBdev2 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 spare_malloc 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 spare_delay 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 [2024-11-19 10:07:53.430859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.348 [2024-11-19 10:07:53.431006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.348 [2024-11-19 10:07:53.431049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:39.348 [2024-11-19 10:07:53.431086] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.348 [2024-11-19 10:07:53.434683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.348 [2024-11-19 10:07:53.434804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.348 spare 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 [2024-11-19 10:07:53.443229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.348 [2024-11-19 10:07:53.446193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.348 [2024-11-19 10:07:53.446396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:39.348 [2024-11-19 10:07:53.446420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:39.348 [2024-11-19 10:07:53.446869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:39.348 [2024-11-19 10:07:53.447121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:39.348 [2024-11-19 10:07:53.447140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:39.348 [2024-11-19 10:07:53.447453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.348 "name": "raid_bdev1", 00:13:39.348 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:39.348 "strip_size_kb": 0, 00:13:39.348 "state": "online", 00:13:39.348 "raid_level": "raid1", 00:13:39.348 "superblock": false, 00:13:39.348 "num_base_bdevs": 2, 00:13:39.348 
"num_base_bdevs_discovered": 2, 00:13:39.348 "num_base_bdevs_operational": 2, 00:13:39.348 "base_bdevs_list": [ 00:13:39.348 { 00:13:39.348 "name": "BaseBdev1", 00:13:39.348 "uuid": "66663692-0d3c-50eb-b73d-6458071d4699", 00:13:39.348 "is_configured": true, 00:13:39.348 "data_offset": 0, 00:13:39.348 "data_size": 65536 00:13:39.348 }, 00:13:39.348 { 00:13:39.348 "name": "BaseBdev2", 00:13:39.348 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:39.348 "is_configured": true, 00:13:39.348 "data_offset": 0, 00:13:39.348 "data_size": 65536 00:13:39.348 } 00:13:39.348 ] 00:13:39.348 }' 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.348 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.918 [2024-11-19 10:07:53.960203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.918 10:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.918 [2024-11-19 10:07:54.075618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.918 "name": "raid_bdev1", 00:13:39.918 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:39.918 "strip_size_kb": 0, 00:13:39.918 "state": "online", 00:13:39.918 "raid_level": "raid1", 00:13:39.918 "superblock": false, 00:13:39.918 "num_base_bdevs": 2, 00:13:39.918 "num_base_bdevs_discovered": 1, 00:13:39.918 "num_base_bdevs_operational": 1, 00:13:39.918 "base_bdevs_list": [ 00:13:39.918 { 00:13:39.918 "name": null, 00:13:39.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.918 "is_configured": false, 00:13:39.918 "data_offset": 0, 00:13:39.918 "data_size": 65536 00:13:39.918 }, 00:13:39.918 { 00:13:39.918 "name": "BaseBdev2", 00:13:39.918 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:39.918 "is_configured": true, 00:13:39.918 "data_offset": 0, 00:13:39.918 "data_size": 65536 00:13:39.918 } 00:13:39.918 ] 00:13:39.918 }' 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.918 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.177 [2024-11-19 10:07:54.225106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:40.177 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:13:40.177 Zero copy mechanism will not be used. 00:13:40.177 Running I/O for 60 seconds... 00:13:40.434 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.434 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.434 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.434 [2024-11-19 10:07:54.620554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.693 10:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.693 10:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.693 [2024-11-19 10:07:54.693957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:40.693 [2024-11-19 10:07:54.696825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.693 [2024-11-19 10:07:54.826941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.693 [2024-11-19 10:07:54.828104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.952 [2024-11-19 10:07:55.050937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:40.952 [2024-11-19 10:07:55.051464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:41.213 114.00 IOPS, 342.00 MiB/s [2024-11-19T10:07:55.445Z] [2024-11-19 10:07:55.417082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:41.213 [2024-11-19 10:07:55.418021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:41.475 [2024-11-19 10:07:55.623152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.475 [2024-11-19 10:07:55.623692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.475 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.475 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.475 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.475 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.476 10:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.734 "name": "raid_bdev1", 00:13:41.734 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:41.734 "strip_size_kb": 0, 00:13:41.734 "state": "online", 00:13:41.734 "raid_level": "raid1", 00:13:41.734 "superblock": false, 00:13:41.734 "num_base_bdevs": 2, 00:13:41.734 "num_base_bdevs_discovered": 2, 00:13:41.734 "num_base_bdevs_operational": 2, 00:13:41.734 "process": { 00:13:41.734 
"type": "rebuild", 00:13:41.734 "target": "spare", 00:13:41.734 "progress": { 00:13:41.734 "blocks": 10240, 00:13:41.734 "percent": 15 00:13:41.734 } 00:13:41.734 }, 00:13:41.734 "base_bdevs_list": [ 00:13:41.734 { 00:13:41.734 "name": "spare", 00:13:41.734 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:41.734 "is_configured": true, 00:13:41.734 "data_offset": 0, 00:13:41.734 "data_size": 65536 00:13:41.734 }, 00:13:41.734 { 00:13:41.734 "name": "BaseBdev2", 00:13:41.734 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:41.734 "is_configured": true, 00:13:41.734 "data_offset": 0, 00:13:41.734 "data_size": 65536 00:13:41.734 } 00:13:41.734 ] 00:13:41.734 }' 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.734 10:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.734 [2024-11-19 10:07:55.854341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.734 [2024-11-19 10:07:55.963905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:41.994 [2024-11-19 10:07:56.074319] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.994 [2024-11-19 10:07:56.087438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:41.994 [2024-11-19 10:07:56.087562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.994 [2024-11-19 10:07:56.087584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.994 [2024-11-19 10:07:56.144937] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.994 "name": "raid_bdev1", 00:13:41.994 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:41.994 "strip_size_kb": 0, 00:13:41.994 "state": "online", 00:13:41.994 "raid_level": "raid1", 00:13:41.994 "superblock": false, 00:13:41.994 "num_base_bdevs": 2, 00:13:41.994 "num_base_bdevs_discovered": 1, 00:13:41.994 "num_base_bdevs_operational": 1, 00:13:41.994 "base_bdevs_list": [ 00:13:41.994 { 00:13:41.994 "name": null, 00:13:41.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.994 "is_configured": false, 00:13:41.994 "data_offset": 0, 00:13:41.994 "data_size": 65536 00:13:41.994 }, 00:13:41.994 { 00:13:41.994 "name": "BaseBdev2", 00:13:41.994 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:41.994 "is_configured": true, 00:13:41.994 "data_offset": 0, 00:13:41.994 "data_size": 65536 00:13:41.994 } 00:13:41.994 ] 00:13:41.994 }' 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.994 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 85.00 IOPS, 255.00 MiB/s [2024-11-19T10:07:56.743Z] 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.511 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.511 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.511 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.512 "name": "raid_bdev1", 00:13:42.512 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:42.512 "strip_size_kb": 0, 00:13:42.512 "state": "online", 00:13:42.512 "raid_level": "raid1", 00:13:42.512 "superblock": false, 00:13:42.512 "num_base_bdevs": 2, 00:13:42.512 "num_base_bdevs_discovered": 1, 00:13:42.512 "num_base_bdevs_operational": 1, 00:13:42.512 "base_bdevs_list": [ 00:13:42.512 { 00:13:42.512 "name": null, 00:13:42.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.512 "is_configured": false, 00:13:42.512 "data_offset": 0, 00:13:42.512 "data_size": 65536 00:13:42.512 }, 00:13:42.512 { 00:13:42.512 "name": "BaseBdev2", 00:13:42.512 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:42.512 "is_configured": true, 00:13:42.512 "data_offset": 0, 00:13:42.512 "data_size": 65536 00:13:42.512 } 00:13:42.512 ] 00:13:42.512 }' 00:13:42.512 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.770 [2024-11-19 10:07:56.854524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.770 10:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:42.770 [2024-11-19 10:07:56.913341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:42.770 [2024-11-19 10:07:56.916566] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.029 [2024-11-19 10:07:57.074462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:43.029 [2024-11-19 10:07:57.075408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:43.288 111.00 IOPS, 333.00 MiB/s [2024-11-19T10:07:57.520Z] [2024-11-19 10:07:57.290484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.288 [2024-11-19 10:07:57.291314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.547 [2024-11-19 10:07:57.629375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.547 [2024-11-19 10:07:57.630601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.806 [2024-11-19 10:07:57.862872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.806 [2024-11-19 
10:07:57.863738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.806 "name": "raid_bdev1", 00:13:43.806 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:43.806 "strip_size_kb": 0, 00:13:43.806 "state": "online", 00:13:43.806 "raid_level": "raid1", 00:13:43.806 "superblock": false, 00:13:43.806 "num_base_bdevs": 2, 00:13:43.806 "num_base_bdevs_discovered": 2, 00:13:43.806 "num_base_bdevs_operational": 2, 00:13:43.806 "process": { 00:13:43.806 "type": "rebuild", 00:13:43.806 "target": "spare", 00:13:43.806 "progress": { 00:13:43.806 "blocks": 10240, 00:13:43.806 "percent": 15 00:13:43.806 } 00:13:43.806 }, 00:13:43.806 "base_bdevs_list": [ 00:13:43.806 { 00:13:43.806 "name": "spare", 
00:13:43.806 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:43.806 "is_configured": true, 00:13:43.806 "data_offset": 0, 00:13:43.806 "data_size": 65536 00:13:43.806 }, 00:13:43.806 { 00:13:43.806 "name": "BaseBdev2", 00:13:43.806 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:43.806 "is_configured": true, 00:13:43.806 "data_offset": 0, 00:13:43.806 "data_size": 65536 00:13:43.806 } 00:13:43.806 ] 00:13:43.806 }' 00:13:43.806 10:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.806 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.806 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.065 
10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.065 "name": "raid_bdev1", 00:13:44.065 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:44.065 "strip_size_kb": 0, 00:13:44.065 "state": "online", 00:13:44.065 "raid_level": "raid1", 00:13:44.065 "superblock": false, 00:13:44.065 "num_base_bdevs": 2, 00:13:44.065 "num_base_bdevs_discovered": 2, 00:13:44.065 "num_base_bdevs_operational": 2, 00:13:44.065 "process": { 00:13:44.065 "type": "rebuild", 00:13:44.065 "target": "spare", 00:13:44.065 "progress": { 00:13:44.065 "blocks": 10240, 00:13:44.065 "percent": 15 00:13:44.065 } 00:13:44.065 }, 00:13:44.065 "base_bdevs_list": [ 00:13:44.065 { 00:13:44.065 "name": "spare", 00:13:44.065 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:44.065 "is_configured": true, 00:13:44.065 "data_offset": 0, 00:13:44.065 "data_size": 65536 00:13:44.065 }, 00:13:44.065 { 00:13:44.065 "name": "BaseBdev2", 00:13:44.065 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:44.065 "is_configured": true, 00:13:44.065 "data_offset": 0, 00:13:44.065 "data_size": 65536 00:13:44.065 } 00:13:44.065 ] 00:13:44.065 }' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.065 [2024-11-19 10:07:58.205235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:44.065 [2024-11-19 10:07:58.206307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:44.065 99.00 IOPS, 297.00 MiB/s [2024-11-19T10:07:58.297Z] 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.065 10:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.324 [2024-11-19 10:07:58.420958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.260 [2024-11-19 10:07:59.152656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:45.260 90.60 IOPS, 271.80 MiB/s [2024-11-19T10:07:59.492Z] 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.260 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.260 "name": "raid_bdev1", 00:13:45.260 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:45.260 "strip_size_kb": 0, 00:13:45.260 "state": "online", 00:13:45.260 "raid_level": "raid1", 00:13:45.260 "superblock": false, 00:13:45.260 "num_base_bdevs": 2, 00:13:45.261 "num_base_bdevs_discovered": 2, 00:13:45.261 "num_base_bdevs_operational": 2, 00:13:45.261 "process": { 00:13:45.261 "type": "rebuild", 00:13:45.261 "target": "spare", 00:13:45.261 "progress": { 00:13:45.261 "blocks": 26624, 00:13:45.261 "percent": 40 00:13:45.261 } 00:13:45.261 }, 00:13:45.261 "base_bdevs_list": [ 00:13:45.261 { 00:13:45.261 "name": "spare", 00:13:45.261 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:45.261 "is_configured": true, 00:13:45.261 "data_offset": 0, 00:13:45.261 "data_size": 65536 00:13:45.261 }, 00:13:45.261 { 00:13:45.261 "name": "BaseBdev2", 00:13:45.261 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:45.261 "is_configured": true, 00:13:45.261 "data_offset": 0, 00:13:45.261 "data_size": 65536 00:13:45.261 } 00:13:45.261 ] 00:13:45.261 }' 00:13:45.261 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.261 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.261 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.261 [2024-11-19 10:07:59.407733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:45.261 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.261 10:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.828 [2024-11-19 10:07:59.763682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:46.087 [2024-11-19 10:08:00.098569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:46.087 83.33 IOPS, 250.00 MiB/s [2024-11-19T10:08:00.319Z] [2024-11-19 10:08:00.314741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.346 "name": "raid_bdev1", 00:13:46.346 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:46.346 "strip_size_kb": 0, 00:13:46.346 "state": "online", 00:13:46.346 "raid_level": "raid1", 00:13:46.346 "superblock": false, 00:13:46.346 "num_base_bdevs": 2, 00:13:46.346 "num_base_bdevs_discovered": 2, 00:13:46.346 "num_base_bdevs_operational": 2, 00:13:46.346 "process": { 00:13:46.346 "type": "rebuild", 00:13:46.346 "target": "spare", 00:13:46.346 "progress": { 00:13:46.346 "blocks": 40960, 00:13:46.346 "percent": 62 00:13:46.346 } 00:13:46.346 }, 00:13:46.346 "base_bdevs_list": [ 00:13:46.346 { 00:13:46.346 "name": "spare", 00:13:46.346 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:46.346 "is_configured": true, 00:13:46.346 "data_offset": 0, 00:13:46.346 "data_size": 65536 00:13:46.346 }, 00:13:46.346 { 00:13:46.346 "name": "BaseBdev2", 00:13:46.346 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:46.346 "is_configured": true, 00:13:46.346 "data_offset": 0, 00:13:46.346 "data_size": 65536 00:13:46.346 } 00:13:46.346 ] 00:13:46.346 }' 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.346 10:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.605 [2024-11-19 10:08:00.651903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:46.863 [2024-11-19 10:08:01.010309] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:47.382 77.43 IOPS, 232.29 MiB/s [2024-11-19T10:08:01.614Z] 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.382 10:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.641 "name": "raid_bdev1", 00:13:47.641 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:47.641 "strip_size_kb": 0, 00:13:47.641 "state": "online", 00:13:47.641 "raid_level": "raid1", 00:13:47.641 "superblock": false, 00:13:47.641 "num_base_bdevs": 2, 00:13:47.641 "num_base_bdevs_discovered": 2, 00:13:47.641 "num_base_bdevs_operational": 2, 00:13:47.641 "process": { 00:13:47.641 "type": "rebuild", 00:13:47.641 "target": "spare", 00:13:47.641 "progress": { 00:13:47.641 "blocks": 59392, 00:13:47.641 
"percent": 90 00:13:47.641 } 00:13:47.641 }, 00:13:47.641 "base_bdevs_list": [ 00:13:47.641 { 00:13:47.641 "name": "spare", 00:13:47.641 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:47.641 "is_configured": true, 00:13:47.641 "data_offset": 0, 00:13:47.641 "data_size": 65536 00:13:47.641 }, 00:13:47.641 { 00:13:47.641 "name": "BaseBdev2", 00:13:47.641 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:47.641 "is_configured": true, 00:13:47.641 "data_offset": 0, 00:13:47.641 "data_size": 65536 00:13:47.641 } 00:13:47.641 ] 00:13:47.641 }' 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.641 10:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.900 [2024-11-19 10:08:01.897101] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.900 [2024-11-19 10:08:01.997012] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:47.900 [2024-11-19 10:08:02.000891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.725 72.00 IOPS, 216.00 MiB/s [2024-11-19T10:08:02.957Z] 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.725 "name": "raid_bdev1", 00:13:48.725 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:48.725 "strip_size_kb": 0, 00:13:48.725 "state": "online", 00:13:48.725 "raid_level": "raid1", 00:13:48.725 "superblock": false, 00:13:48.725 "num_base_bdevs": 2, 00:13:48.725 "num_base_bdevs_discovered": 2, 00:13:48.725 "num_base_bdevs_operational": 2, 00:13:48.725 "base_bdevs_list": [ 00:13:48.725 { 00:13:48.725 "name": "spare", 00:13:48.725 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:48.725 "is_configured": true, 00:13:48.725 "data_offset": 0, 00:13:48.725 "data_size": 65536 00:13:48.725 }, 00:13:48.725 { 00:13:48.725 "name": "BaseBdev2", 00:13:48.725 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:48.725 "is_configured": true, 00:13:48.725 "data_offset": 0, 00:13:48.725 "data_size": 65536 00:13:48.725 } 00:13:48.725 ] 00:13:48.725 }' 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.725 10:08:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.725 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.984 10:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.984 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.984 "name": "raid_bdev1", 00:13:48.984 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:48.984 "strip_size_kb": 0, 00:13:48.984 "state": "online", 00:13:48.984 "raid_level": "raid1", 00:13:48.984 "superblock": false, 00:13:48.984 "num_base_bdevs": 2, 00:13:48.984 "num_base_bdevs_discovered": 2, 00:13:48.984 "num_base_bdevs_operational": 2, 00:13:48.984 "base_bdevs_list": [ 00:13:48.984 { 00:13:48.984 "name": "spare", 00:13:48.984 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 
00:13:48.984 "is_configured": true, 00:13:48.984 "data_offset": 0, 00:13:48.984 "data_size": 65536 00:13:48.984 }, 00:13:48.984 { 00:13:48.984 "name": "BaseBdev2", 00:13:48.984 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:48.984 "is_configured": true, 00:13:48.984 "data_offset": 0, 00:13:48.984 "data_size": 65536 00:13:48.984 } 00:13:48.984 ] 00:13:48.984 }' 00:13:48.984 10:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.984 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.985 "name": "raid_bdev1", 00:13:48.985 "uuid": "76385618-299c-41fd-a742-4f3be3016073", 00:13:48.985 "strip_size_kb": 0, 00:13:48.985 "state": "online", 00:13:48.985 "raid_level": "raid1", 00:13:48.985 "superblock": false, 00:13:48.985 "num_base_bdevs": 2, 00:13:48.985 "num_base_bdevs_discovered": 2, 00:13:48.985 "num_base_bdevs_operational": 2, 00:13:48.985 "base_bdevs_list": [ 00:13:48.985 { 00:13:48.985 "name": "spare", 00:13:48.985 "uuid": "88f3f06d-90b3-5498-af3b-a3af2932052b", 00:13:48.985 "is_configured": true, 00:13:48.985 "data_offset": 0, 00:13:48.985 "data_size": 65536 00:13:48.985 }, 00:13:48.985 { 00:13:48.985 "name": "BaseBdev2", 00:13:48.985 "uuid": "ffb7ce10-1fa1-5efd-8174-ef6e55ae2bc5", 00:13:48.985 "is_configured": true, 00:13:48.985 "data_offset": 0, 00:13:48.985 "data_size": 65536 00:13:48.985 } 00:13:48.985 ] 00:13:48.985 }' 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.985 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.506 67.44 IOPS, 202.33 MiB/s [2024-11-19T10:08:03.738Z] 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.506 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.507 [2024-11-19 10:08:03.650632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.507 [2024-11-19 10:08:03.650696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.507 00:13:49.507 Latency(us) 00:13:49.507 [2024-11-19T10:08:03.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.507 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.507 raid_bdev1 : 9.45 65.64 196.91 0.00 0.00 19413.86 340.71 114866.73 00:13:49.507 [2024-11-19T10:08:03.739Z] =================================================================================================================== 00:13:49.507 [2024-11-19T10:08:03.739Z] Total : 65.64 196.91 0.00 0.00 19413.86 340.71 114866.73 00:13:49.507 [2024-11-19 10:08:03.697222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.507 [2024-11-19 10:08:03.697620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.507 { 00:13:49.507 "results": [ 00:13:49.507 { 00:13:49.507 "job": "raid_bdev1", 00:13:49.507 "core_mask": "0x1", 00:13:49.507 "workload": "randrw", 00:13:49.507 "percentage": 50, 00:13:49.507 "status": "finished", 00:13:49.507 "queue_depth": 2, 00:13:49.507 "io_size": 3145728, 00:13:49.507 "runtime": 9.446168, 00:13:49.507 "iops": 65.63508080737078, 00:13:49.507 "mibps": 196.90524242211234, 00:13:49.507 "io_failed": 0, 00:13:49.507 "io_timeout": 0, 00:13:49.507 "avg_latency_us": 19413.85684457478, 00:13:49.507 "min_latency_us": 340.71272727272725, 00:13:49.507 "max_latency_us": 114866.73454545454 00:13:49.507 } 00:13:49.507 ], 00:13:49.507 "core_count": 1 00:13:49.507 } 00:13:49.507 [2024-11-19 10:08:03.697898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.507 [2024-11-19 10:08:03.697930] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.507 10:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.765 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.766 10:08:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.766 10:08:03 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:50.025 /dev/nbd0 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.025 1+0 records in 00:13:50.025 1+0 records out 00:13:50.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502517 s, 8.2 MB/s 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.025 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:50.284 /dev/nbd1 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.284 1+0 records in 00:13:50.284 1+0 records out 00:13:50.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668348 s, 6.1 MB/s 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.284 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:50.544 10:08:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.544 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.802 10:08:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.802 10:08:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76604 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76604 ']' 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76604 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76604 
00:13:51.478 killing process with pid 76604 00:13:51.478 Received shutdown signal, test time was about 11.148659 seconds 00:13:51.478 00:13:51.478 Latency(us) 00:13:51.478 [2024-11-19T10:08:05.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.478 [2024-11-19T10:08:05.710Z] =================================================================================================================== 00:13:51.478 [2024-11-19T10:08:05.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76604' 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76604 00:13:51.478 10:08:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76604 00:13:51.479 [2024-11-19 10:08:05.377035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.479 [2024-11-19 10:08:05.603975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.856 ************************************ 00:13:52.856 END TEST raid_rebuild_test_io 00:13:52.856 ************************************ 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:52.856 00:13:52.856 real 0m14.745s 00:13:52.856 user 0m19.084s 00:13:52.856 sys 0m1.669s 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 10:08:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:52.856 10:08:06 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:52.856 10:08:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.856 10:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 ************************************ 00:13:52.856 START TEST raid_rebuild_test_sb_io 00:13:52.856 ************************************ 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 
00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77011 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77011 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77011 ']' 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:52.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.856 10:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 [2024-11-19 10:08:06.990802] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:13:52.856 [2024-11-19 10:08:06.991268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77011 ] 00:13:52.856 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.856 Zero copy mechanism will not be used. 00:13:53.115 [2024-11-19 10:08:07.177426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.372 [2024-11-19 10:08:07.365633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.372 [2024-11-19 10:08:07.603100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.372 [2024-11-19 10:08:07.603212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.939 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:53.939 BaseBdev1_malloc 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.940 [2024-11-19 10:08:08.105998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.940 [2024-11-19 10:08:08.106150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.940 [2024-11-19 10:08:08.106197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.940 [2024-11-19 10:08:08.106220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.940 [2024-11-19 10:08:08.109650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.940 [2024-11-19 10:08:08.109931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.940 BaseBdev1 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.940 BaseBdev2_malloc 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.940 10:08:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.940 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.940 [2024-11-19 10:08:08.167354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:53.940 [2024-11-19 10:08:08.167504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.940 [2024-11-19 10:08:08.167543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.940 [2024-11-19 10:08:08.167569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.940 [2024-11-19 10:08:08.170973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.940 [2024-11-19 10:08:08.171057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.199 BaseBdev2 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 spare_malloc 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.199 10:08:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 spare_delay 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 [2024-11-19 10:08:08.247343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.199 [2024-11-19 10:08:08.247496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.199 [2024-11-19 10:08:08.247541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:54.199 [2024-11-19 10:08:08.247563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.199 [2024-11-19 10:08:08.251369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.199 [2024-11-19 10:08:08.251729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.199 spare 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 [2024-11-19 10:08:08.260392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.199 [2024-11-19 10:08:08.263343] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.199 [2024-11-19 10:08:08.263686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.199 [2024-11-19 10:08:08.263717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.199 [2024-11-19 10:08:08.264201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:54.199 [2024-11-19 10:08:08.264476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.199 [2024-11-19 10:08:08.264502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.199 [2024-11-19 10:08:08.264863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.199 "name": "raid_bdev1", 00:13:54.199 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:54.199 "strip_size_kb": 0, 00:13:54.199 "state": "online", 00:13:54.199 "raid_level": "raid1", 00:13:54.199 "superblock": true, 00:13:54.199 "num_base_bdevs": 2, 00:13:54.199 "num_base_bdevs_discovered": 2, 00:13:54.199 "num_base_bdevs_operational": 2, 00:13:54.199 "base_bdevs_list": [ 00:13:54.199 { 00:13:54.199 "name": "BaseBdev1", 00:13:54.199 "uuid": "b96b77c3-7b3a-5e30-8577-d5156bd29171", 00:13:54.199 "is_configured": true, 00:13:54.199 "data_offset": 2048, 00:13:54.199 "data_size": 63488 00:13:54.199 }, 00:13:54.199 { 00:13:54.199 "name": "BaseBdev2", 00:13:54.199 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:54.199 "is_configured": true, 00:13:54.199 "data_offset": 2048, 00:13:54.199 "data_size": 63488 00:13:54.199 } 00:13:54.199 ] 00:13:54.199 }' 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.199 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.766 [2024-11-19 10:08:08.769329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 [2024-11-19 10:08:08.864967] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.766 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.766 "name": 
"raid_bdev1", 00:13:54.766 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:54.767 "strip_size_kb": 0, 00:13:54.767 "state": "online", 00:13:54.767 "raid_level": "raid1", 00:13:54.767 "superblock": true, 00:13:54.767 "num_base_bdevs": 2, 00:13:54.767 "num_base_bdevs_discovered": 1, 00:13:54.767 "num_base_bdevs_operational": 1, 00:13:54.767 "base_bdevs_list": [ 00:13:54.767 { 00:13:54.767 "name": null, 00:13:54.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.767 "is_configured": false, 00:13:54.767 "data_offset": 0, 00:13:54.767 "data_size": 63488 00:13:54.767 }, 00:13:54.767 { 00:13:54.767 "name": "BaseBdev2", 00:13:54.767 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:54.767 "is_configured": true, 00:13:54.767 "data_offset": 2048, 00:13:54.767 "data_size": 63488 00:13:54.767 } 00:13:54.767 ] 00:13:54.767 }' 00:13:54.767 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.767 10:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.025 [2024-11-19 10:08:09.022454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:55.025 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.025 Zero copy mechanism will not be used. 00:13:55.025 Running I/O for 60 seconds... 
00:13:55.283 10:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.283 10:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.283 10:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.283 [2024-11-19 10:08:09.369719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.283 10:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.283 10:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:55.283 [2024-11-19 10:08:09.443888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:55.283 [2024-11-19 10:08:09.446861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.541 [2024-11-19 10:08:09.576491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.541 [2024-11-19 10:08:09.577476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.799 [2024-11-19 10:08:09.792718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.799 [2024-11-19 10:08:09.793571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.058 110.00 IOPS, 330.00 MiB/s [2024-11-19T10:08:10.290Z] [2024-11-19 10:08:10.122551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:56.058 [2024-11-19 10:08:10.123825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:56.316 [2024-11-19 10:08:10.328942] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.316 "name": "raid_bdev1", 00:13:56.316 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:56.316 "strip_size_kb": 0, 00:13:56.316 "state": "online", 00:13:56.316 "raid_level": "raid1", 00:13:56.316 "superblock": true, 00:13:56.316 "num_base_bdevs": 2, 00:13:56.316 "num_base_bdevs_discovered": 2, 00:13:56.316 "num_base_bdevs_operational": 2, 00:13:56.316 "process": { 00:13:56.316 "type": "rebuild", 00:13:56.316 "target": "spare", 00:13:56.316 "progress": { 00:13:56.316 "blocks": 10240, 00:13:56.316 "percent": 16 00:13:56.316 } 00:13:56.316 }, 00:13:56.316 "base_bdevs_list": [ 00:13:56.316 { 00:13:56.316 "name": "spare", 
00:13:56.316 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:13:56.316 "is_configured": true, 00:13:56.316 "data_offset": 2048, 00:13:56.316 "data_size": 63488 00:13:56.316 }, 00:13:56.316 { 00:13:56.316 "name": "BaseBdev2", 00:13:56.316 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:56.316 "is_configured": true, 00:13:56.316 "data_offset": 2048, 00:13:56.316 "data_size": 63488 00:13:56.316 } 00:13:56.316 ] 00:13:56.316 }' 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.316 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.574 [2024-11-19 10:08:10.575283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:56.574 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.574 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:56.574 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.574 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.574 [2024-11-19 10:08:10.597718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.574 [2024-11-19 10:08:10.697558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.574 [2024-11-19 10:08:10.800350] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.832 [2024-11-19 10:08:10.813577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.832 [2024-11-19 10:08:10.813713] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.832 [2024-11-19 10:08:10.813738] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.832 [2024-11-19 10:08:10.854891] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.832 10:08:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.832 "name": "raid_bdev1", 00:13:56.832 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:56.832 "strip_size_kb": 0, 00:13:56.832 "state": "online", 00:13:56.832 "raid_level": "raid1", 00:13:56.832 "superblock": true, 00:13:56.832 "num_base_bdevs": 2, 00:13:56.832 "num_base_bdevs_discovered": 1, 00:13:56.832 "num_base_bdevs_operational": 1, 00:13:56.832 "base_bdevs_list": [ 00:13:56.832 { 00:13:56.832 "name": null, 00:13:56.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.832 "is_configured": false, 00:13:56.832 "data_offset": 0, 00:13:56.832 "data_size": 63488 00:13:56.832 }, 00:13:56.832 { 00:13:56.832 "name": "BaseBdev2", 00:13:56.832 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:56.832 "is_configured": true, 00:13:56.832 "data_offset": 2048, 00:13:56.832 "data_size": 63488 00:13:56.832 } 00:13:56.832 ] 00:13:56.832 }' 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.832 10:08:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.398 101.50 IOPS, 304.50 MiB/s [2024-11-19T10:08:11.630Z] 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.398 
10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.398 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.398 "name": "raid_bdev1", 00:13:57.398 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:57.398 "strip_size_kb": 0, 00:13:57.398 "state": "online", 00:13:57.398 "raid_level": "raid1", 00:13:57.398 "superblock": true, 00:13:57.398 "num_base_bdevs": 2, 00:13:57.398 "num_base_bdevs_discovered": 1, 00:13:57.398 "num_base_bdevs_operational": 1, 00:13:57.398 "base_bdevs_list": [ 00:13:57.398 { 00:13:57.398 "name": null, 00:13:57.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.398 "is_configured": false, 00:13:57.398 "data_offset": 0, 00:13:57.398 "data_size": 63488 00:13:57.398 }, 00:13:57.398 { 00:13:57.398 "name": "BaseBdev2", 00:13:57.399 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:57.399 "is_configured": true, 00:13:57.399 "data_offset": 2048, 00:13:57.399 "data_size": 63488 00:13:57.399 } 00:13:57.399 ] 00:13:57.399 }' 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.399 10:08:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.399 [2024-11-19 10:08:11.497030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.399 10:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:57.399 [2024-11-19 10:08:11.586817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:57.399 [2024-11-19 10:08:11.589634] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.657 [2024-11-19 10:08:11.728732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.914 [2024-11-19 10:08:11.959414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:57.914 [2024-11-19 10:08:11.960373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.171 112.33 IOPS, 337.00 MiB/s [2024-11-19T10:08:12.403Z] [2024-11-19 10:08:12.289803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.171 [2024-11-19 10:08:12.291123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.430 [2024-11-19 10:08:12.431469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.430 "name": "raid_bdev1", 00:13:58.430 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:58.430 "strip_size_kb": 0, 00:13:58.430 "state": "online", 00:13:58.430 "raid_level": "raid1", 00:13:58.430 "superblock": true, 00:13:58.430 "num_base_bdevs": 2, 00:13:58.430 "num_base_bdevs_discovered": 2, 00:13:58.430 "num_base_bdevs_operational": 2, 00:13:58.430 "process": { 00:13:58.430 "type": "rebuild", 00:13:58.430 "target": "spare", 00:13:58.430 "progress": { 00:13:58.430 "blocks": 12288, 00:13:58.430 "percent": 19 00:13:58.430 } 00:13:58.430 }, 00:13:58.430 "base_bdevs_list": [ 00:13:58.430 { 00:13:58.430 "name": "spare", 00:13:58.430 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:13:58.430 "is_configured": true, 00:13:58.430 "data_offset": 2048, 00:13:58.430 "data_size": 63488 
00:13:58.430 }, 00:13:58.430 { 00:13:58.430 "name": "BaseBdev2", 00:13:58.430 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:58.430 "is_configured": true, 00:13:58.430 "data_offset": 2048, 00:13:58.430 "data_size": 63488 00:13:58.430 } 00:13:58.430 ] 00:13:58.430 }' 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.430 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.689 [2024-11-19 10:08:12.666348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:58.689 [2024-11-19 10:08:12.667271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:58.689 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:58.690 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.690 "name": "raid_bdev1", 00:13:58.690 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:58.690 "strip_size_kb": 0, 00:13:58.690 "state": "online", 00:13:58.690 "raid_level": "raid1", 00:13:58.690 "superblock": true, 00:13:58.690 "num_base_bdevs": 2, 00:13:58.690 "num_base_bdevs_discovered": 2, 00:13:58.690 "num_base_bdevs_operational": 2, 00:13:58.690 "process": { 00:13:58.690 "type": "rebuild", 00:13:58.690 "target": "spare", 00:13:58.690 "progress": { 00:13:58.690 "blocks": 14336, 00:13:58.690 "percent": 22 00:13:58.690 } 00:13:58.690 }, 00:13:58.690 "base_bdevs_list": [ 00:13:58.690 { 00:13:58.690 "name": "spare", 00:13:58.690 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:13:58.690 "is_configured": true, 00:13:58.690 "data_offset": 2048, 00:13:58.690 "data_size": 63488 
00:13:58.690 }, 00:13:58.690 { 00:13:58.690 "name": "BaseBdev2", 00:13:58.690 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:58.690 "is_configured": true, 00:13:58.690 "data_offset": 2048, 00:13:58.690 "data_size": 63488 00:13:58.690 } 00:13:58.690 ] 00:13:58.690 }' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.690 10:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.690 [2024-11-19 10:08:12.907443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:58.690 [2024-11-19 10:08:12.908057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:59.232 103.75 IOPS, 311.25 MiB/s [2024-11-19T10:08:13.464Z] [2024-11-19 10:08:13.251889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:59.491 [2024-11-19 10:08:13.484341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.751 [2024-11-19 10:08:13.902161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:59.751 [2024-11-19 10:08:13.902701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.751 "name": "raid_bdev1", 00:13:59.751 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:13:59.751 "strip_size_kb": 0, 00:13:59.751 "state": "online", 00:13:59.751 "raid_level": "raid1", 00:13:59.751 "superblock": true, 00:13:59.751 "num_base_bdevs": 2, 00:13:59.751 "num_base_bdevs_discovered": 2, 00:13:59.751 "num_base_bdevs_operational": 2, 00:13:59.751 "process": { 00:13:59.751 "type": "rebuild", 00:13:59.751 "target": "spare", 00:13:59.751 "progress": { 00:13:59.751 "blocks": 26624, 00:13:59.751 "percent": 41 00:13:59.751 } 00:13:59.751 }, 00:13:59.751 "base_bdevs_list": [ 00:13:59.751 { 00:13:59.751 "name": "spare", 00:13:59.751 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:13:59.751 "is_configured": true, 00:13:59.751 "data_offset": 
2048, 00:13:59.751 "data_size": 63488 00:13:59.751 }, 00:13:59.751 { 00:13:59.751 "name": "BaseBdev2", 00:13:59.751 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:13:59.751 "is_configured": true, 00:13:59.751 "data_offset": 2048, 00:13:59.751 "data_size": 63488 00:13:59.751 } 00:13:59.751 ] 00:13:59.751 }' 00:13:59.751 10:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.011 10:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.011 10:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.011 93.60 IOPS, 280.80 MiB/s [2024-11-19T10:08:14.243Z] 10:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.011 10:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.948 84.33 IOPS, 253.00 MiB/s [2024-11-19T10:08:15.180Z] 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.949 [2024-11-19 10:08:15.106660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.949 "name": "raid_bdev1", 00:14:00.949 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:00.949 "strip_size_kb": 0, 00:14:00.949 "state": "online", 00:14:00.949 "raid_level": "raid1", 00:14:00.949 "superblock": true, 00:14:00.949 "num_base_bdevs": 2, 00:14:00.949 "num_base_bdevs_discovered": 2, 00:14:00.949 "num_base_bdevs_operational": 2, 00:14:00.949 "process": { 00:14:00.949 "type": "rebuild", 00:14:00.949 "target": "spare", 00:14:00.949 "progress": { 00:14:00.949 "blocks": 45056, 00:14:00.949 "percent": 70 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 "base_bdevs_list": [ 00:14:00.949 { 00:14:00.949 "name": "spare", 00:14:00.949 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:00.949 "is_configured": true, 00:14:00.949 "data_offset": 2048, 00:14:00.949 "data_size": 63488 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "name": "BaseBdev2", 00:14:00.949 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:00.949 "is_configured": true, 00:14:00.949 "data_offset": 2048, 00:14:00.949 "data_size": 63488 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }' 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.949 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.209 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.209 10:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.469 [2024-11-19 10:08:15.448131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:01.469 [2024-11-19 10:08:15.678982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:02.037 75.71 IOPS, 227.14 MiB/s [2024-11-19T10:08:16.269Z] 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.037 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.297 "name": "raid_bdev1", 00:14:02.297 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:02.297 
"strip_size_kb": 0, 00:14:02.297 "state": "online", 00:14:02.297 "raid_level": "raid1", 00:14:02.297 "superblock": true, 00:14:02.297 "num_base_bdevs": 2, 00:14:02.297 "num_base_bdevs_discovered": 2, 00:14:02.297 "num_base_bdevs_operational": 2, 00:14:02.297 "process": { 00:14:02.297 "type": "rebuild", 00:14:02.297 "target": "spare", 00:14:02.297 "progress": { 00:14:02.297 "blocks": 59392, 00:14:02.297 "percent": 93 00:14:02.297 } 00:14:02.297 }, 00:14:02.297 "base_bdevs_list": [ 00:14:02.297 { 00:14:02.297 "name": "spare", 00:14:02.297 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:02.297 "is_configured": true, 00:14:02.297 "data_offset": 2048, 00:14:02.297 "data_size": 63488 00:14:02.297 }, 00:14:02.297 { 00:14:02.297 "name": "BaseBdev2", 00:14:02.297 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:02.297 "is_configured": true, 00:14:02.297 "data_offset": 2048, 00:14:02.297 "data_size": 63488 00:14:02.297 } 00:14:02.297 ] 00:14:02.297 }' 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.297 [2024-11-19 10:08:16.358331] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.297 10:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.297 [2024-11-19 10:08:16.458269] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:02.297 [2024-11-19 10:08:16.461863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.434 69.88 IOPS, 209.62 MiB/s [2024-11-19T10:08:17.666Z] 10:08:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.434 "name": "raid_bdev1", 00:14:03.434 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:03.434 "strip_size_kb": 0, 00:14:03.434 "state": "online", 00:14:03.434 "raid_level": "raid1", 00:14:03.434 "superblock": true, 00:14:03.434 "num_base_bdevs": 2, 00:14:03.434 "num_base_bdevs_discovered": 2, 00:14:03.434 "num_base_bdevs_operational": 2, 00:14:03.434 "base_bdevs_list": [ 00:14:03.434 { 00:14:03.434 "name": "spare", 00:14:03.434 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:03.434 "is_configured": true, 00:14:03.434 "data_offset": 2048, 00:14:03.434 "data_size": 63488 00:14:03.434 }, 00:14:03.434 { 00:14:03.434 "name": "BaseBdev2", 
00:14:03.434 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:03.434 "is_configured": true, 00:14:03.434 "data_offset": 2048, 00:14:03.434 "data_size": 63488 00:14:03.434 } 00:14:03.434 ] 00:14:03.434 }' 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.434 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:03.435 "name": "raid_bdev1", 00:14:03.435 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:03.435 "strip_size_kb": 0, 00:14:03.435 "state": "online", 00:14:03.435 "raid_level": "raid1", 00:14:03.435 "superblock": true, 00:14:03.435 "num_base_bdevs": 2, 00:14:03.435 "num_base_bdevs_discovered": 2, 00:14:03.435 "num_base_bdevs_operational": 2, 00:14:03.435 "base_bdevs_list": [ 00:14:03.435 { 00:14:03.435 "name": "spare", 00:14:03.435 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:03.435 "is_configured": true, 00:14:03.435 "data_offset": 2048, 00:14:03.435 "data_size": 63488 00:14:03.435 }, 00:14:03.435 { 00:14:03.435 "name": "BaseBdev2", 00:14:03.435 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:03.435 "is_configured": true, 00:14:03.435 "data_offset": 2048, 00:14:03.435 "data_size": 63488 00:14:03.435 } 00:14:03.435 ] 00:14:03.435 }' 00:14:03.435 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.694 "name": "raid_bdev1", 00:14:03.694 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:03.694 "strip_size_kb": 0, 00:14:03.694 "state": "online", 00:14:03.694 "raid_level": "raid1", 00:14:03.694 "superblock": true, 00:14:03.694 "num_base_bdevs": 2, 00:14:03.694 "num_base_bdevs_discovered": 2, 00:14:03.694 "num_base_bdevs_operational": 2, 00:14:03.694 "base_bdevs_list": [ 00:14:03.694 { 00:14:03.694 "name": "spare", 00:14:03.694 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:03.694 "is_configured": true, 00:14:03.694 "data_offset": 2048, 00:14:03.694 "data_size": 63488 00:14:03.694 }, 00:14:03.694 { 00:14:03.694 "name": "BaseBdev2", 00:14:03.694 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:03.694 "is_configured": true, 00:14:03.694 "data_offset": 2048, 00:14:03.694 "data_size": 63488 00:14:03.694 } 
00:14:03.694 ] 00:14:03.694 }' 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.694 10:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.232 65.89 IOPS, 197.67 MiB/s [2024-11-19T10:08:18.464Z] 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.232 [2024-11-19 10:08:18.292212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.232 [2024-11-19 10:08:18.292523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.232 00:14:04.232 Latency(us) 00:14:04.232 [2024-11-19T10:08:18.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.232 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:04.232 raid_bdev1 : 9.36 64.30 192.91 0.00 0.00 21563.21 404.01 131548.63 00:14:04.232 [2024-11-19T10:08:18.464Z] =================================================================================================================== 00:14:04.232 [2024-11-19T10:08:18.464Z] Total : 64.30 192.91 0.00 0.00 21563.21 404.01 131548.63 00:14:04.232 { 00:14:04.232 "results": [ 00:14:04.232 { 00:14:04.232 "job": "raid_bdev1", 00:14:04.232 "core_mask": "0x1", 00:14:04.232 "workload": "randrw", 00:14:04.232 "percentage": 50, 00:14:04.232 "status": "finished", 00:14:04.232 "queue_depth": 2, 00:14:04.232 "io_size": 3145728, 00:14:04.232 "runtime": 9.362102, 00:14:04.232 "iops": 64.3017988908901, 00:14:04.232 "mibps": 192.9053966726703, 00:14:04.232 "io_failed": 0, 00:14:04.232 "io_timeout": 0, 00:14:04.232 "avg_latency_us": 21563.21478707339, 00:14:04.232 "min_latency_us": 
404.01454545454544, 00:14:04.232 "max_latency_us": 131548.62545454546 00:14:04.232 } 00:14:04.232 ], 00:14:04.232 "core_count": 1 00:14:04.232 } 00:14:04.232 [2024-11-19 10:08:18.410841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.232 [2024-11-19 10:08:18.410937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.232 [2024-11-19 10:08:18.411077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.232 [2024-11-19 10:08:18.411097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.232 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 
00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.492 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:04.752 /dev/nbd0 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.752 1+0 records in 00:14:04.752 1+0 records out 00:14:04.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488782 s, 8.4 MB/s 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.752 10:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:05.011 /dev/nbd1 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.011 1+0 records in 00:14:05.011 1+0 records out 00:14:05.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717517 s, 5.7 MB/s 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.011 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.270 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:05.529 10:08:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.529 10:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.098 10:08:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.098 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.098 [2024-11-19 10:08:20.075642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.098 [2024-11-19 10:08:20.075748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.098 [2024-11-19 10:08:20.075804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:06.098 [2024-11-19 10:08:20.075824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.099 [2024-11-19 10:08:20.079334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.099 [2024-11-19 10:08:20.079428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.099 [2024-11-19 10:08:20.079611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:06.099 [2024-11-19 10:08:20.079689] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.099 [2024-11-19 10:08:20.080038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.099 spare 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.099 [2024-11-19 10:08:20.180211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:06.099 [2024-11-19 10:08:20.180637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.099 [2024-11-19 10:08:20.181240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:06.099 [2024-11-19 10:08:20.181705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:06.099 [2024-11-19 10:08:20.181731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:06.099 [2024-11-19 10:08:20.182058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.099 "name": "raid_bdev1", 00:14:06.099 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:06.099 "strip_size_kb": 0, 00:14:06.099 "state": "online", 00:14:06.099 "raid_level": "raid1", 00:14:06.099 "superblock": true, 00:14:06.099 "num_base_bdevs": 2, 00:14:06.099 "num_base_bdevs_discovered": 2, 00:14:06.099 "num_base_bdevs_operational": 2, 00:14:06.099 "base_bdevs_list": [ 00:14:06.099 { 00:14:06.099 "name": "spare", 00:14:06.099 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:06.099 "is_configured": true, 00:14:06.099 "data_offset": 2048, 00:14:06.099 "data_size": 63488 00:14:06.099 }, 00:14:06.099 { 00:14:06.099 "name": 
"BaseBdev2", 00:14:06.099 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:06.099 "is_configured": true, 00:14:06.099 "data_offset": 2048, 00:14:06.099 "data_size": 63488 00:14:06.099 } 00:14:06.099 ] 00:14:06.099 }' 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.099 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.715 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.716 "name": "raid_bdev1", 00:14:06.716 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:06.716 "strip_size_kb": 0, 00:14:06.716 "state": "online", 00:14:06.716 "raid_level": "raid1", 00:14:06.716 "superblock": true, 00:14:06.716 "num_base_bdevs": 2, 00:14:06.716 "num_base_bdevs_discovered": 2, 00:14:06.716 
"num_base_bdevs_operational": 2, 00:14:06.716 "base_bdevs_list": [ 00:14:06.716 { 00:14:06.716 "name": "spare", 00:14:06.716 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:06.716 "is_configured": true, 00:14:06.716 "data_offset": 2048, 00:14:06.716 "data_size": 63488 00:14:06.716 }, 00:14:06.716 { 00:14:06.716 "name": "BaseBdev2", 00:14:06.716 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:06.716 "is_configured": true, 00:14:06.716 "data_offset": 2048, 00:14:06.716 "data_size": 63488 00:14:06.716 } 00:14:06.716 ] 00:14:06.716 }' 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:14:06.716 [2024-11-19 10:08:20.900220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.716 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.975 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:06.975 "name": "raid_bdev1", 00:14:06.975 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:06.975 "strip_size_kb": 0, 00:14:06.975 "state": "online", 00:14:06.975 "raid_level": "raid1", 00:14:06.975 "superblock": true, 00:14:06.975 "num_base_bdevs": 2, 00:14:06.975 "num_base_bdevs_discovered": 1, 00:14:06.975 "num_base_bdevs_operational": 1, 00:14:06.975 "base_bdevs_list": [ 00:14:06.975 { 00:14:06.975 "name": null, 00:14:06.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.975 "is_configured": false, 00:14:06.975 "data_offset": 0, 00:14:06.975 "data_size": 63488 00:14:06.975 }, 00:14:06.975 { 00:14:06.975 "name": "BaseBdev2", 00:14:06.975 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:06.975 "is_configured": true, 00:14:06.976 "data_offset": 2048, 00:14:06.976 "data_size": 63488 00:14:06.976 } 00:14:06.976 ] 00:14:06.976 }' 00:14:06.976 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.976 10:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.234 10:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.234 10:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.234 10:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.234 [2024-11-19 10:08:21.448541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.234 [2024-11-19 10:08:21.448888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.234 [2024-11-19 10:08:21.448920] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
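Earlier in this trace, `cmp -i 1048576 /dev/nbd0 /dev/nbd1` verified that the two exported base bdevs hold identical data past the 1 MiB superblock region. A small reproduction with regular files (paths and sizes are illustrative, not the test's actual devices):

```shell
#!/usr/bin/env bash
# Two 2 MiB files that differ only inside the first 1 MiB "superblock" region.
a=$(mktemp); b=$(mktemp)
dd if=/dev/zero of="$a" bs=1M count=2 status=none
cp "$a" "$b"
printf 'different-superblock' | dd of="$b" conv=notrunc status=none
# cmp -i 1048576 skips the first 1 MiB of both inputs, so the superblock
# difference is ignored and only the data regions are compared.
cmp -i 1048576 "$a" "$b" && echo "data regions match"
rm -f "$a" "$b"
```

Without `-i`, the same `cmp` would report a difference at byte 1, which is why the test offsets past the on-disk superblock before comparing the mirrored payloads.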
00:14:07.234 [2024-11-19 10:08:21.448991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.493 [2024-11-19 10:08:21.466862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:07.493 10:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.493 10:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:07.493 [2024-11-19 10:08:21.470085] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.430 "name": "raid_bdev1", 00:14:08.430 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:08.430 "strip_size_kb": 0, 00:14:08.430 "state": "online", 
00:14:08.430 "raid_level": "raid1", 00:14:08.430 "superblock": true, 00:14:08.430 "num_base_bdevs": 2, 00:14:08.430 "num_base_bdevs_discovered": 2, 00:14:08.430 "num_base_bdevs_operational": 2, 00:14:08.430 "process": { 00:14:08.430 "type": "rebuild", 00:14:08.430 "target": "spare", 00:14:08.430 "progress": { 00:14:08.430 "blocks": 20480, 00:14:08.430 "percent": 32 00:14:08.430 } 00:14:08.430 }, 00:14:08.430 "base_bdevs_list": [ 00:14:08.430 { 00:14:08.430 "name": "spare", 00:14:08.430 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:08.430 "is_configured": true, 00:14:08.430 "data_offset": 2048, 00:14:08.430 "data_size": 63488 00:14:08.430 }, 00:14:08.430 { 00:14:08.430 "name": "BaseBdev2", 00:14:08.430 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:08.430 "is_configured": true, 00:14:08.430 "data_offset": 2048, 00:14:08.430 "data_size": 63488 00:14:08.430 } 00:14:08.430 ] 00:14:08.430 }' 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.430 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.430 [2024-11-19 10:08:22.637599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.690 [2024-11-19 10:08:22.682813] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.690 [2024-11-19 
10:08:22.682968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.691 [2024-11-19 10:08:22.682997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.691 [2024-11-19 10:08:22.683019] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.691 "name": "raid_bdev1", 00:14:08.691 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:08.691 "strip_size_kb": 0, 00:14:08.691 "state": "online", 00:14:08.691 "raid_level": "raid1", 00:14:08.691 "superblock": true, 00:14:08.691 "num_base_bdevs": 2, 00:14:08.691 "num_base_bdevs_discovered": 1, 00:14:08.691 "num_base_bdevs_operational": 1, 00:14:08.691 "base_bdevs_list": [ 00:14:08.691 { 00:14:08.691 "name": null, 00:14:08.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.691 "is_configured": false, 00:14:08.691 "data_offset": 0, 00:14:08.691 "data_size": 63488 00:14:08.691 }, 00:14:08.691 { 00:14:08.691 "name": "BaseBdev2", 00:14:08.691 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:08.691 "is_configured": true, 00:14:08.691 "data_offset": 2048, 00:14:08.691 "data_size": 63488 00:14:08.691 } 00:14:08.691 ] 00:14:08.691 }' 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.691 10:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.260 10:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:09.260 10:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.260 10:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.260 [2024-11-19 10:08:23.258224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:09.260 [2024-11-19 10:08:23.258378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.260 [2024-11-19 10:08:23.258419] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:09.260 [2024-11-19 10:08:23.258439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.260 [2024-11-19 10:08:23.259195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.260 [2024-11-19 10:08:23.259242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:09.260 [2024-11-19 10:08:23.259389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:09.260 [2024-11-19 10:08:23.259424] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.260 [2024-11-19 10:08:23.259440] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:09.260 [2024-11-19 10:08:23.259485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.260 [2024-11-19 10:08:23.277293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:09.260 spare 00:14:09.260 10:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.260 10:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:09.260 [2024-11-19 10:08:23.280460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.220 "name": "raid_bdev1", 00:14:10.220 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:10.220 "strip_size_kb": 0, 00:14:10.220 "state": "online", 00:14:10.220 "raid_level": "raid1", 00:14:10.220 "superblock": true, 00:14:10.220 "num_base_bdevs": 2, 00:14:10.220 "num_base_bdevs_discovered": 2, 00:14:10.220 "num_base_bdevs_operational": 2, 00:14:10.220 "process": { 00:14:10.220 "type": "rebuild", 00:14:10.220 "target": "spare", 00:14:10.220 "progress": { 00:14:10.220 "blocks": 18432, 00:14:10.220 "percent": 29 00:14:10.220 } 00:14:10.220 }, 00:14:10.220 "base_bdevs_list": [ 00:14:10.220 { 00:14:10.220 "name": "spare", 00:14:10.220 "uuid": "be06fc5b-32bc-5201-9d89-3be168567a1e", 00:14:10.220 "is_configured": true, 00:14:10.220 "data_offset": 2048, 00:14:10.220 "data_size": 63488 00:14:10.220 }, 00:14:10.220 { 00:14:10.220 "name": "BaseBdev2", 00:14:10.220 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:10.220 "is_configured": true, 00:14:10.220 "data_offset": 2048, 00:14:10.220 "data_size": 63488 00:14:10.220 } 00:14:10.220 ] 00:14:10.220 }' 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.220 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.479 [2024-11-19 10:08:24.451571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.479 [2024-11-19 10:08:24.493748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.479 [2024-11-19 10:08:24.494352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.479 [2024-11-19 10:08:24.494669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.479 [2024-11-19 10:08:24.494885] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.479 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.479 "name": "raid_bdev1", 00:14:10.479 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:10.479 "strip_size_kb": 0, 00:14:10.479 "state": "online", 00:14:10.479 "raid_level": "raid1", 00:14:10.479 "superblock": true, 00:14:10.479 "num_base_bdevs": 2, 00:14:10.479 "num_base_bdevs_discovered": 1, 00:14:10.479 "num_base_bdevs_operational": 1, 00:14:10.479 "base_bdevs_list": [ 00:14:10.479 { 00:14:10.479 "name": null, 00:14:10.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.480 "is_configured": false, 00:14:10.480 "data_offset": 0, 00:14:10.480 "data_size": 63488 00:14:10.480 }, 00:14:10.480 { 00:14:10.480 "name": "BaseBdev2", 00:14:10.480 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:10.480 "is_configured": true, 00:14:10.480 "data_offset": 2048, 00:14:10.480 "data_size": 63488 00:14:10.480 } 00:14:10.480 ] 00:14:10.480 }' 
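The repeated `verify_raid_bdev_state` blocks in this trace select the named bdev from `bdev_raid_get_bdevs all` with jq and compare its fields against expectations. A reduced sketch of that check (requires `jq`; the function name and the two fields checked are an illustrative subset of what the real helper verifies):

```shell
#!/usr/bin/env bash
# Hypothetical reduction of the verify_raid_bdev_state pattern in the trace.
verify_state() {
    local json=$1 expected_state=$2 expected_operational=$3
    local info
    # Pick the entry for raid_bdev1, as jq -r '.[] | select(.name == ...)' does above.
    info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$json")
    [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq $expected_operational ]] || return 1
    return 0
}
```

Against the JSON shown in the log, `verify_state "$raid_bdevs_json" online 1` would succeed after the spare is removed, since `num_base_bdevs_operational` drops to 1 while the array stays online.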
00:14:10.480 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.480 10:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.048 "name": "raid_bdev1", 00:14:11.048 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:11.048 "strip_size_kb": 0, 00:14:11.048 "state": "online", 00:14:11.048 "raid_level": "raid1", 00:14:11.048 "superblock": true, 00:14:11.048 "num_base_bdevs": 2, 00:14:11.048 "num_base_bdevs_discovered": 1, 00:14:11.048 "num_base_bdevs_operational": 1, 00:14:11.048 "base_bdevs_list": [ 00:14:11.048 { 00:14:11.048 "name": null, 00:14:11.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.048 "is_configured": false, 00:14:11.048 "data_offset": 0, 
00:14:11.048 "data_size": 63488 00:14:11.048 }, 00:14:11.048 { 00:14:11.048 "name": "BaseBdev2", 00:14:11.048 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:11.048 "is_configured": true, 00:14:11.048 "data_offset": 2048, 00:14:11.048 "data_size": 63488 00:14:11.048 } 00:14:11.048 ] 00:14:11.048 }' 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.048 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.049 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.049 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.049 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.049 [2024-11-19 10:08:25.223682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.049 [2024-11-19 10:08:25.223794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.049 [2024-11-19 10:08:25.223837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:11.049 [2024-11-19 10:08:25.223860] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.049 [2024-11-19 10:08:25.224595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.049 [2024-11-19 10:08:25.224635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.049 [2024-11-19 10:08:25.224775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:11.049 [2024-11-19 10:08:25.224821] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:11.049 [2024-11-19 10:08:25.224837] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:11.049 [2024-11-19 10:08:25.224854] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:11.049 BaseBdev1 00:14:11.049 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.049 10:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.427 "name": "raid_bdev1", 00:14:12.427 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:12.427 "strip_size_kb": 0, 00:14:12.427 "state": "online", 00:14:12.427 "raid_level": "raid1", 00:14:12.427 "superblock": true, 00:14:12.427 "num_base_bdevs": 2, 00:14:12.427 "num_base_bdevs_discovered": 1, 00:14:12.427 "num_base_bdevs_operational": 1, 00:14:12.427 "base_bdevs_list": [ 00:14:12.427 { 00:14:12.427 "name": null, 00:14:12.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.427 "is_configured": false, 00:14:12.427 "data_offset": 0, 00:14:12.427 "data_size": 63488 00:14:12.427 }, 00:14:12.427 { 00:14:12.427 "name": "BaseBdev2", 00:14:12.427 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:12.427 "is_configured": true, 00:14:12.427 "data_offset": 2048, 00:14:12.427 "data_size": 63488 00:14:12.427 } 00:14:12.427 ] 00:14:12.427 }' 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.427 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.686 "name": "raid_bdev1", 00:14:12.686 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:12.686 "strip_size_kb": 0, 00:14:12.686 "state": "online", 00:14:12.686 "raid_level": "raid1", 00:14:12.686 "superblock": true, 00:14:12.686 "num_base_bdevs": 2, 00:14:12.686 "num_base_bdevs_discovered": 1, 00:14:12.686 "num_base_bdevs_operational": 1, 00:14:12.686 "base_bdevs_list": [ 00:14:12.686 { 00:14:12.686 "name": null, 00:14:12.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.686 "is_configured": false, 00:14:12.686 "data_offset": 0, 00:14:12.686 "data_size": 63488 00:14:12.686 }, 00:14:12.686 { 00:14:12.686 "name": "BaseBdev2", 00:14:12.686 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:12.686 "is_configured": true, 
00:14:12.686 "data_offset": 2048, 00:14:12.686 "data_size": 63488 00:14:12.686 } 00:14:12.686 ] 00:14:12.686 }' 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.686 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.686 [2024-11-19 10:08:26.916841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.686 [2024-11-19 10:08:26.917190] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:12.686 [2024-11-19 10:08:26.917224] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:12.945 request: 00:14:12.945 { 00:14:12.945 "base_bdev": "BaseBdev1", 00:14:12.945 "raid_bdev": "raid_bdev1", 00:14:12.945 "method": "bdev_raid_add_base_bdev", 00:14:12.945 "req_id": 1 00:14:12.945 } 00:14:12.945 Got JSON-RPC error response 00:14:12.945 response: 00:14:12.945 { 00:14:12.945 "code": -22, 00:14:12.945 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:12.945 } 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.945 10:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.883 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.883 "name": "raid_bdev1", 00:14:13.883 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:13.883 "strip_size_kb": 0, 00:14:13.883 "state": "online", 00:14:13.883 "raid_level": "raid1", 00:14:13.883 "superblock": true, 00:14:13.884 "num_base_bdevs": 2, 00:14:13.884 "num_base_bdevs_discovered": 1, 00:14:13.884 "num_base_bdevs_operational": 1, 00:14:13.884 "base_bdevs_list": [ 00:14:13.884 { 00:14:13.884 "name": null, 00:14:13.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.884 "is_configured": false, 00:14:13.884 "data_offset": 0, 00:14:13.884 "data_size": 63488 00:14:13.884 }, 00:14:13.884 { 00:14:13.884 "name": "BaseBdev2", 00:14:13.884 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:13.884 "is_configured": true, 00:14:13.884 "data_offset": 2048, 00:14:13.884 "data_size": 63488 00:14:13.884 } 00:14:13.884 ] 00:14:13.884 }' 
00:14:13.884 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.884 10:08:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.452 "name": "raid_bdev1", 00:14:14.452 "uuid": "547a35a1-b5e0-4917-921b-4e53db675493", 00:14:14.452 "strip_size_kb": 0, 00:14:14.452 "state": "online", 00:14:14.452 "raid_level": "raid1", 00:14:14.452 "superblock": true, 00:14:14.452 "num_base_bdevs": 2, 00:14:14.452 "num_base_bdevs_discovered": 1, 00:14:14.452 "num_base_bdevs_operational": 1, 00:14:14.452 "base_bdevs_list": [ 00:14:14.452 { 00:14:14.452 "name": null, 00:14:14.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.452 "is_configured": false, 00:14:14.452 "data_offset": 0, 
00:14:14.452 "data_size": 63488 00:14:14.452 }, 00:14:14.452 { 00:14:14.452 "name": "BaseBdev2", 00:14:14.452 "uuid": "f1c168ca-446c-523d-b993-356af83f0101", 00:14:14.452 "is_configured": true, 00:14:14.452 "data_offset": 2048, 00:14:14.452 "data_size": 63488 00:14:14.452 } 00:14:14.452 ] 00:14:14.452 }' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77011 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77011 ']' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77011 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77011 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.452 killing process with pid 77011 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77011' 00:14:14.452 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77011 00:14:14.452 Received shutdown signal, test time was 
about 19.627512 seconds 00:14:14.452 00:14:14.452 Latency(us) 00:14:14.452 [2024-11-19T10:08:28.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.452 [2024-11-19T10:08:28.684Z] =================================================================================================================== 00:14:14.452 [2024-11-19T10:08:28.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.453 [2024-11-19 10:08:28.653571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.453 10:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77011 00:14:14.453 [2024-11-19 10:08:28.653863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.453 [2024-11-19 10:08:28.653970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.453 [2024-11-19 10:08:28.654001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:14.711 [2024-11-19 10:08:28.898731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:16.090 00:14:16.090 real 0m23.274s 00:14:16.090 user 0m31.009s 00:14:16.090 sys 0m2.448s 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.090 ************************************ 00:14:16.090 END TEST raid_rebuild_test_sb_io 00:14:16.090 ************************************ 00:14:16.090 10:08:30 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:16.090 10:08:30 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:16.090 10:08:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:16.090 
10:08:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.090 10:08:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.090 ************************************ 00:14:16.090 START TEST raid_rebuild_test 00:14:16.090 ************************************ 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77737 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77737 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77737 ']' 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.090 10:08:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.090 10:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.090 Zero copy mechanism will not be used. 00:14:16.090 [2024-11-19 10:08:30.301666] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:14:16.090 [2024-11-19 10:08:30.301861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77737 ] 00:14:16.350 [2024-11-19 10:08:30.483085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.608 [2024-11-19 10:08:30.637120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.867 [2024-11-19 10:08:30.871756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.867 [2024-11-19 10:08:30.871933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.436 BaseBdev1_malloc 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.436 [2024-11-19 10:08:31.423669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:17.436 [2024-11-19 10:08:31.423835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.436 [2024-11-19 10:08:31.423881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:17.436 [2024-11-19 10:08:31.423903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.436 [2024-11-19 10:08:31.427285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.436 [2024-11-19 10:08:31.427372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.436 BaseBdev1 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.436 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:17.436 BaseBdev2_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.437 [2024-11-19 10:08:31.486058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:17.437 [2024-11-19 10:08:31.486240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.437 [2024-11-19 10:08:31.486296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:17.437 [2024-11-19 10:08:31.486330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.437 [2024-11-19 10:08:31.490685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.437 [2024-11-19 10:08:31.490848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:17.437 BaseBdev2 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.437 BaseBdev3_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.437 [2024-11-19 10:08:31.568404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:17.437 [2024-11-19 10:08:31.568539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.437 [2024-11-19 10:08:31.568597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:17.437 [2024-11-19 10:08:31.568624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.437 [2024-11-19 10:08:31.572861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.437 [2024-11-19 10:08:31.572985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:17.437 BaseBdev3 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.437 BaseBdev4_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.437 [2024-11-19 10:08:31.634160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:17.437 [2024-11-19 10:08:31.634263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.437 [2024-11-19 10:08:31.634301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:17.437 [2024-11-19 10:08:31.634321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.437 [2024-11-19 10:08:31.637634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.437 [2024-11-19 10:08:31.637722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:17.437 BaseBdev4 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.437 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 spare_malloc 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 spare_delay 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.703 
10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 [2024-11-19 10:08:31.700651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.703 [2024-11-19 10:08:31.700805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.703 [2024-11-19 10:08:31.700852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:17.703 [2024-11-19 10:08:31.700873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.703 [2024-11-19 10:08:31.704272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.703 [2024-11-19 10:08:31.704369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.703 spare 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 [2024-11-19 10:08:31.712760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.703 [2024-11-19 10:08:31.715833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.703 [2024-11-19 10:08:31.716024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.703 [2024-11-19 10:08:31.716121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:17.703 [2024-11-19 10:08:31.716290] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:17.703 [2024-11-19 10:08:31.716316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:17.703 [2024-11-19 10:08:31.716739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:17.703 [2024-11-19 10:08:31.717068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:17.703 [2024-11-19 10:08:31.717108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:17.703 [2024-11-19 10:08:31.717467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.703 "name": "raid_bdev1", 00:14:17.703 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:17.703 "strip_size_kb": 0, 00:14:17.703 "state": "online", 00:14:17.703 "raid_level": "raid1", 00:14:17.703 "superblock": false, 00:14:17.703 "num_base_bdevs": 4, 00:14:17.703 "num_base_bdevs_discovered": 4, 00:14:17.703 "num_base_bdevs_operational": 4, 00:14:17.703 "base_bdevs_list": [ 00:14:17.703 { 00:14:17.703 "name": "BaseBdev1", 00:14:17.703 "uuid": "4e9645c0-20fb-55c3-9d6d-69df9595eed6", 00:14:17.703 "is_configured": true, 00:14:17.703 "data_offset": 0, 00:14:17.703 "data_size": 65536 00:14:17.703 }, 00:14:17.703 { 00:14:17.703 "name": "BaseBdev2", 00:14:17.703 "uuid": "f05f1379-689e-558b-9261-f3d223a53bea", 00:14:17.703 "is_configured": true, 00:14:17.703 "data_offset": 0, 00:14:17.703 "data_size": 65536 00:14:17.703 }, 00:14:17.703 { 00:14:17.703 "name": "BaseBdev3", 00:14:17.703 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:17.703 "is_configured": true, 00:14:17.703 "data_offset": 0, 00:14:17.703 "data_size": 65536 00:14:17.703 }, 00:14:17.703 { 00:14:17.703 "name": "BaseBdev4", 00:14:17.703 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:17.703 "is_configured": true, 00:14:17.703 "data_offset": 0, 00:14:17.703 "data_size": 65536 00:14:17.703 } 00:14:17.703 ] 00:14:17.703 }' 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.703 10:08:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:18.270 [2024-11-19 10:08:32.262044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.270 10:08:32 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.270 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:18.529 [2024-11-19 10:08:32.637749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:18.529 /dev/nbd0 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.529 1+0 records in 00:14:18.529 1+0 records out 00:14:18.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599503 s, 6.8 MB/s 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:18.529 10:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:28.510 65536+0 records in 00:14:28.510 65536+0 records out 00:14:28.510 33554432 bytes (34 MB, 32 MiB) copied, 9.10858 s, 3.7 MB/s 00:14:28.510 10:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:28.510 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.511 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:28.511 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:14:28.511 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:28.511 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.511 10:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.511 [2024-11-19 10:08:42.152372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 [2024-11-19 10:08:42.196535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.511 "name": "raid_bdev1", 00:14:28.511 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:28.511 "strip_size_kb": 0, 00:14:28.511 "state": "online", 00:14:28.511 "raid_level": "raid1", 00:14:28.511 "superblock": false, 00:14:28.511 "num_base_bdevs": 4, 00:14:28.511 "num_base_bdevs_discovered": 3, 00:14:28.511 "num_base_bdevs_operational": 3, 00:14:28.511 "base_bdevs_list": [ 00:14:28.511 { 00:14:28.511 "name": null, 00:14:28.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.511 
"is_configured": false, 00:14:28.511 "data_offset": 0, 00:14:28.511 "data_size": 65536 00:14:28.511 }, 00:14:28.511 { 00:14:28.511 "name": "BaseBdev2", 00:14:28.511 "uuid": "f05f1379-689e-558b-9261-f3d223a53bea", 00:14:28.511 "is_configured": true, 00:14:28.511 "data_offset": 0, 00:14:28.511 "data_size": 65536 00:14:28.511 }, 00:14:28.511 { 00:14:28.511 "name": "BaseBdev3", 00:14:28.511 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:28.511 "is_configured": true, 00:14:28.511 "data_offset": 0, 00:14:28.511 "data_size": 65536 00:14:28.511 }, 00:14:28.511 { 00:14:28.511 "name": "BaseBdev4", 00:14:28.511 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:28.511 "is_configured": true, 00:14:28.511 "data_offset": 0, 00:14:28.511 "data_size": 65536 00:14:28.511 } 00:14:28.511 ] 00:14:28.511 }' 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 [2024-11-19 10:08:42.716661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.511 [2024-11-19 10:08:42.731887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.511 10:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:28.511 [2024-11-19 10:08:42.734763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.558 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.817 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.817 "name": "raid_bdev1", 00:14:29.817 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:29.817 "strip_size_kb": 0, 00:14:29.817 "state": "online", 00:14:29.817 "raid_level": "raid1", 00:14:29.817 "superblock": false, 00:14:29.817 "num_base_bdevs": 4, 00:14:29.817 "num_base_bdevs_discovered": 4, 00:14:29.817 "num_base_bdevs_operational": 4, 00:14:29.817 "process": { 00:14:29.817 "type": "rebuild", 00:14:29.817 "target": "spare", 00:14:29.817 "progress": { 00:14:29.817 "blocks": 18432, 00:14:29.817 "percent": 28 00:14:29.817 } 00:14:29.817 }, 00:14:29.817 "base_bdevs_list": [ 00:14:29.817 { 00:14:29.817 "name": "spare", 00:14:29.817 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:29.817 "is_configured": true, 00:14:29.817 "data_offset": 0, 00:14:29.817 "data_size": 65536 00:14:29.817 }, 00:14:29.817 { 00:14:29.817 "name": "BaseBdev2", 00:14:29.817 "uuid": 
"f05f1379-689e-558b-9261-f3d223a53bea", 00:14:29.817 "is_configured": true, 00:14:29.817 "data_offset": 0, 00:14:29.817 "data_size": 65536 00:14:29.817 }, 00:14:29.817 { 00:14:29.817 "name": "BaseBdev3", 00:14:29.817 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:29.818 "is_configured": true, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 }, 00:14:29.818 { 00:14:29.818 "name": "BaseBdev4", 00:14:29.818 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:29.818 "is_configured": true, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 } 00:14:29.818 ] 00:14:29.818 }' 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.818 [2024-11-19 10:08:43.917600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.818 [2024-11-19 10:08:43.948028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.818 [2024-11-19 10:08:43.948171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.818 [2024-11-19 10:08:43.948203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.818 [2024-11-19 10:08:43.948219] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.818 10:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.818 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.818 "name": "raid_bdev1", 00:14:29.818 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:29.818 "strip_size_kb": 0, 00:14:29.818 "state": "online", 
00:14:29.818 "raid_level": "raid1", 00:14:29.818 "superblock": false, 00:14:29.818 "num_base_bdevs": 4, 00:14:29.818 "num_base_bdevs_discovered": 3, 00:14:29.818 "num_base_bdevs_operational": 3, 00:14:29.818 "base_bdevs_list": [ 00:14:29.818 { 00:14:29.818 "name": null, 00:14:29.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.818 "is_configured": false, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 }, 00:14:29.818 { 00:14:29.818 "name": "BaseBdev2", 00:14:29.818 "uuid": "f05f1379-689e-558b-9261-f3d223a53bea", 00:14:29.818 "is_configured": true, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 }, 00:14:29.818 { 00:14:29.818 "name": "BaseBdev3", 00:14:29.818 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:29.818 "is_configured": true, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 }, 00:14:29.818 { 00:14:29.818 "name": "BaseBdev4", 00:14:29.818 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:29.818 "is_configured": true, 00:14:29.818 "data_offset": 0, 00:14:29.818 "data_size": 65536 00:14:29.818 } 00:14:29.818 ] 00:14:29.818 }' 00:14:29.818 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.818 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.386 "name": "raid_bdev1", 00:14:30.386 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:30.386 "strip_size_kb": 0, 00:14:30.386 "state": "online", 00:14:30.386 "raid_level": "raid1", 00:14:30.386 "superblock": false, 00:14:30.386 "num_base_bdevs": 4, 00:14:30.386 "num_base_bdevs_discovered": 3, 00:14:30.386 "num_base_bdevs_operational": 3, 00:14:30.386 "base_bdevs_list": [ 00:14:30.386 { 00:14:30.386 "name": null, 00:14:30.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.386 "is_configured": false, 00:14:30.386 "data_offset": 0, 00:14:30.386 "data_size": 65536 00:14:30.386 }, 00:14:30.386 { 00:14:30.386 "name": "BaseBdev2", 00:14:30.386 "uuid": "f05f1379-689e-558b-9261-f3d223a53bea", 00:14:30.386 "is_configured": true, 00:14:30.386 "data_offset": 0, 00:14:30.386 "data_size": 65536 00:14:30.386 }, 00:14:30.386 { 00:14:30.386 "name": "BaseBdev3", 00:14:30.386 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:30.386 "is_configured": true, 00:14:30.386 "data_offset": 0, 00:14:30.386 "data_size": 65536 00:14:30.386 }, 00:14:30.386 { 00:14:30.386 "name": "BaseBdev4", 00:14:30.386 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:30.386 "is_configured": true, 00:14:30.386 "data_offset": 0, 00:14:30.386 "data_size": 65536 00:14:30.386 } 00:14:30.386 ] 00:14:30.386 }' 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.386 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.646 [2024-11-19 10:08:44.646911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.646 [2024-11-19 10:08:44.661261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.646 10:08:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:30.646 [2024-11-19 10:08:44.664226] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.583 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.583 10:08:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.584 "name": "raid_bdev1", 00:14:31.584 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:31.584 "strip_size_kb": 0, 00:14:31.584 "state": "online", 00:14:31.584 "raid_level": "raid1", 00:14:31.584 "superblock": false, 00:14:31.584 "num_base_bdevs": 4, 00:14:31.584 "num_base_bdevs_discovered": 4, 00:14:31.584 "num_base_bdevs_operational": 4, 00:14:31.584 "process": { 00:14:31.584 "type": "rebuild", 00:14:31.584 "target": "spare", 00:14:31.584 "progress": { 00:14:31.584 "blocks": 20480, 00:14:31.584 "percent": 31 00:14:31.584 } 00:14:31.584 }, 00:14:31.584 "base_bdevs_list": [ 00:14:31.584 { 00:14:31.584 "name": "spare", 00:14:31.584 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:31.584 "is_configured": true, 00:14:31.584 "data_offset": 0, 00:14:31.584 "data_size": 65536 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "name": "BaseBdev2", 00:14:31.584 "uuid": "f05f1379-689e-558b-9261-f3d223a53bea", 00:14:31.584 "is_configured": true, 00:14:31.584 "data_offset": 0, 00:14:31.584 "data_size": 65536 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "name": "BaseBdev3", 00:14:31.584 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:31.584 "is_configured": true, 00:14:31.584 "data_offset": 0, 00:14:31.584 "data_size": 65536 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "name": "BaseBdev4", 00:14:31.584 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:31.584 "is_configured": true, 00:14:31.584 "data_offset": 0, 00:14:31.584 "data_size": 65536 00:14:31.584 } 00:14:31.584 ] 00:14:31.584 }' 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.584 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.843 [2024-11-19 10:08:45.835166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.843 [2024-11-19 10:08:45.876925] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.843 
10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.843 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.843 "name": "raid_bdev1", 00:14:31.843 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:31.843 "strip_size_kb": 0, 00:14:31.843 "state": "online", 00:14:31.843 "raid_level": "raid1", 00:14:31.843 "superblock": false, 00:14:31.843 "num_base_bdevs": 4, 00:14:31.843 "num_base_bdevs_discovered": 3, 00:14:31.843 "num_base_bdevs_operational": 3, 00:14:31.843 "process": { 00:14:31.843 "type": "rebuild", 00:14:31.843 "target": "spare", 00:14:31.843 "progress": { 00:14:31.843 "blocks": 24576, 00:14:31.843 "percent": 37 00:14:31.843 } 00:14:31.843 }, 00:14:31.843 "base_bdevs_list": [ 00:14:31.843 { 00:14:31.843 "name": "spare", 00:14:31.843 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:31.843 "is_configured": true, 00:14:31.844 "data_offset": 0, 00:14:31.844 "data_size": 65536 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "name": null, 00:14:31.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.844 "is_configured": false, 00:14:31.844 "data_offset": 0, 00:14:31.844 "data_size": 65536 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "name": "BaseBdev3", 00:14:31.844 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:31.844 "is_configured": true, 
00:14:31.844 "data_offset": 0, 00:14:31.844 "data_size": 65536 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "name": "BaseBdev4", 00:14:31.844 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:31.844 "is_configured": true, 00:14:31.844 "data_offset": 0, 00:14:31.844 "data_size": 65536 00:14:31.844 } 00:14:31.844 ] 00:14:31.844 }' 00:14:31.844 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.844 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.844 10:08:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.844 10:08:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.844 10:08:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.103 "name": "raid_bdev1", 00:14:32.103 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:32.103 "strip_size_kb": 0, 00:14:32.103 "state": "online", 00:14:32.103 "raid_level": "raid1", 00:14:32.103 "superblock": false, 00:14:32.103 "num_base_bdevs": 4, 00:14:32.103 "num_base_bdevs_discovered": 3, 00:14:32.103 "num_base_bdevs_operational": 3, 00:14:32.103 "process": { 00:14:32.103 "type": "rebuild", 00:14:32.103 "target": "spare", 00:14:32.103 "progress": { 00:14:32.103 "blocks": 26624, 00:14:32.103 "percent": 40 00:14:32.103 } 00:14:32.103 }, 00:14:32.103 "base_bdevs_list": [ 00:14:32.103 { 00:14:32.103 "name": "spare", 00:14:32.103 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:32.103 "is_configured": true, 00:14:32.103 "data_offset": 0, 00:14:32.103 "data_size": 65536 00:14:32.103 }, 00:14:32.103 { 00:14:32.103 "name": null, 00:14:32.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.103 "is_configured": false, 00:14:32.103 "data_offset": 0, 00:14:32.103 "data_size": 65536 00:14:32.103 }, 00:14:32.103 { 00:14:32.103 "name": "BaseBdev3", 00:14:32.103 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:32.103 "is_configured": true, 00:14:32.103 "data_offset": 0, 00:14:32.103 "data_size": 65536 00:14:32.103 }, 00:14:32.103 { 00:14:32.103 "name": "BaseBdev4", 00:14:32.103 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:32.103 "is_configured": true, 00:14:32.103 "data_offset": 0, 00:14:32.103 "data_size": 65536 00:14:32.103 } 00:14:32.103 ] 00:14:32.103 }' 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.103 10:08:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.040 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.040 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.040 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.041 "name": "raid_bdev1", 00:14:33.041 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:33.041 "strip_size_kb": 0, 00:14:33.041 "state": "online", 00:14:33.041 "raid_level": "raid1", 00:14:33.041 "superblock": false, 00:14:33.041 "num_base_bdevs": 4, 00:14:33.041 "num_base_bdevs_discovered": 3, 00:14:33.041 "num_base_bdevs_operational": 3, 00:14:33.041 "process": { 00:14:33.041 "type": "rebuild", 00:14:33.041 "target": "spare", 00:14:33.041 "progress": { 00:14:33.041 
"blocks": 51200, 00:14:33.041 "percent": 78 00:14:33.041 } 00:14:33.041 }, 00:14:33.041 "base_bdevs_list": [ 00:14:33.041 { 00:14:33.041 "name": "spare", 00:14:33.041 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:33.041 "is_configured": true, 00:14:33.041 "data_offset": 0, 00:14:33.041 "data_size": 65536 00:14:33.041 }, 00:14:33.041 { 00:14:33.041 "name": null, 00:14:33.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.041 "is_configured": false, 00:14:33.041 "data_offset": 0, 00:14:33.041 "data_size": 65536 00:14:33.041 }, 00:14:33.041 { 00:14:33.041 "name": "BaseBdev3", 00:14:33.041 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:33.041 "is_configured": true, 00:14:33.041 "data_offset": 0, 00:14:33.041 "data_size": 65536 00:14:33.041 }, 00:14:33.041 { 00:14:33.041 "name": "BaseBdev4", 00:14:33.041 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:33.041 "is_configured": true, 00:14:33.041 "data_offset": 0, 00:14:33.041 "data_size": 65536 00:14:33.041 } 00:14:33.041 ] 00:14:33.041 }' 00:14:33.041 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.300 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.300 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.300 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.300 10:08:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.869 [2024-11-19 10:08:47.897827] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.869 [2024-11-19 10:08:47.897993] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.869 [2024-11-19 10:08:47.898070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.436 "name": "raid_bdev1", 00:14:34.436 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:34.436 "strip_size_kb": 0, 00:14:34.436 "state": "online", 00:14:34.436 "raid_level": "raid1", 00:14:34.436 "superblock": false, 00:14:34.436 "num_base_bdevs": 4, 00:14:34.436 "num_base_bdevs_discovered": 3, 00:14:34.436 "num_base_bdevs_operational": 3, 00:14:34.436 "base_bdevs_list": [ 00:14:34.436 { 00:14:34.436 "name": "spare", 00:14:34.436 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": null, 00:14:34.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.436 "is_configured": false, 00:14:34.436 
"data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": "BaseBdev3", 00:14:34.436 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": "BaseBdev4", 00:14:34.436 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 } 00:14:34.436 ] 00:14:34.436 }' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.436 "name": "raid_bdev1", 00:14:34.436 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:34.436 "strip_size_kb": 0, 00:14:34.436 "state": "online", 00:14:34.436 "raid_level": "raid1", 00:14:34.436 "superblock": false, 00:14:34.436 "num_base_bdevs": 4, 00:14:34.436 "num_base_bdevs_discovered": 3, 00:14:34.436 "num_base_bdevs_operational": 3, 00:14:34.436 "base_bdevs_list": [ 00:14:34.436 { 00:14:34.436 "name": "spare", 00:14:34.436 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": null, 00:14:34.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.436 "is_configured": false, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": "BaseBdev3", 00:14:34.436 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 }, 00:14:34.436 { 00:14:34.436 "name": "BaseBdev4", 00:14:34.436 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:34.436 "is_configured": true, 00:14:34.436 "data_offset": 0, 00:14:34.436 "data_size": 65536 00:14:34.436 } 00:14:34.436 ] 00:14:34.436 }' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.436 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.696 
10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.696 "name": "raid_bdev1", 00:14:34.696 "uuid": "ba88b547-05da-4f9d-9e36-08706e7eeff7", 00:14:34.696 "strip_size_kb": 0, 00:14:34.696 "state": "online", 00:14:34.696 "raid_level": "raid1", 00:14:34.696 "superblock": false, 00:14:34.696 "num_base_bdevs": 4, 00:14:34.696 "num_base_bdevs_discovered": 
3, 00:14:34.696 "num_base_bdevs_operational": 3, 00:14:34.696 "base_bdevs_list": [ 00:14:34.696 { 00:14:34.696 "name": "spare", 00:14:34.696 "uuid": "68eba6b8-4b44-5bf0-ac62-a94f4d0e0ba2", 00:14:34.696 "is_configured": true, 00:14:34.696 "data_offset": 0, 00:14:34.696 "data_size": 65536 00:14:34.696 }, 00:14:34.696 { 00:14:34.696 "name": null, 00:14:34.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.696 "is_configured": false, 00:14:34.696 "data_offset": 0, 00:14:34.696 "data_size": 65536 00:14:34.696 }, 00:14:34.696 { 00:14:34.696 "name": "BaseBdev3", 00:14:34.696 "uuid": "5a3b180d-5333-5f8a-9e50-49a1772ef05a", 00:14:34.696 "is_configured": true, 00:14:34.696 "data_offset": 0, 00:14:34.696 "data_size": 65536 00:14:34.696 }, 00:14:34.696 { 00:14:34.696 "name": "BaseBdev4", 00:14:34.696 "uuid": "69702b0f-9b2d-5176-99a8-419e47139136", 00:14:34.696 "is_configured": true, 00:14:34.696 "data_offset": 0, 00:14:34.696 "data_size": 65536 00:14:34.696 } 00:14:34.696 ] 00:14:34.696 }' 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.696 10:08:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.262 [2024-11-19 10:08:49.260815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.262 [2024-11-19 10:08:49.260881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.262 [2024-11-19 10:08:49.261013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.262 [2024-11-19 10:08:49.261147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:14:35.262 [2024-11-19 10:08:49.261175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.262 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:35.520 /dev/nbd0 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.520 1+0 records in 00:14:35.520 1+0 records out 00:14:35.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047729 s, 8.6 MB/s 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.520 10:08:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.520 10:08:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:35.779 /dev/nbd1 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.039 1+0 records in 00:14:36.039 1+0 records out 00:14:36.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615843 s, 6.7 MB/s 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.039 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.613 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.614 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:36.614 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.614 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.614 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77737 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77737 ']' 00:14:36.895 10:08:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77737 00:14:36.895 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:36.895 10:08:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.895 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77737 00:14:36.895 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.895 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.896 killing process with pid 77737 00:14:36.896 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77737' 00:14:36.896 Received shutdown signal, test time was about 60.000000 seconds 00:14:36.896 00:14:36.896 Latency(us) 00:14:36.896 [2024-11-19T10:08:51.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.896 [2024-11-19T10:08:51.128Z] =================================================================================================================== 00:14:36.896 [2024-11-19T10:08:51.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:36.896 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77737 00:14:36.896 10:08:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77737 00:14:36.896 [2024-11-19 10:08:51.031288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.464 [2024-11-19 10:08:51.514576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:38.842 00:14:38.842 real 0m22.469s 00:14:38.842 user 0m25.104s 00:14:38.842 sys 0m4.042s 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.842 ************************************ 00:14:38.842 END TEST raid_rebuild_test 00:14:38.842 ************************************ 00:14:38.842 
10:08:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:38.842 10:08:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:38.842 10:08:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.842 10:08:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.842 ************************************ 00:14:38.842 START TEST raid_rebuild_test_sb 00:14:38.842 ************************************ 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.842 
10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78230 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78230 00:14:38.842 10:08:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78230 ']' 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.842 10:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.842 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:38.842 Zero copy mechanism will not be used. 00:14:38.842 [2024-11-19 10:08:52.859098] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:14:38.842 [2024-11-19 10:08:52.859305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78230 ] 00:14:38.842 [2024-11-19 10:08:53.043403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.101 [2024-11-19 10:08:53.194161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.360 [2024-11-19 10:08:53.425673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.360 [2024-11-19 10:08:53.425799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 BaseBdev1_malloc 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 [2024-11-19 10:08:53.995509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:39.927 [2024-11-19 10:08:53.995663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.927 [2024-11-19 10:08:53.995709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:39.927 [2024-11-19 10:08:53.995731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.927 [2024-11-19 10:08:53.999442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.927 [2024-11-19 10:08:53.999530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.927 BaseBdev1 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 BaseBdev2_malloc 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 [2024-11-19 10:08:54.057989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:39.927 [2024-11-19 10:08:54.058130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.927 [2024-11-19 10:08:54.058170] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:39.927 [2024-11-19 10:08:54.058195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.927 [2024-11-19 10:08:54.061552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.927 [2024-11-19 10:08:54.061645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.927 BaseBdev2 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 BaseBdev3_malloc 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 [2024-11-19 10:08:54.126532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:39.927 [2024-11-19 10:08:54.126663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.927 [2024-11-19 10:08:54.126707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:39.927 [2024-11-19 10:08:54.126727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:39.927 [2024-11-19 10:08:54.130095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.927 [2024-11-19 10:08:54.130193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:39.927 BaseBdev3 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.927 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 BaseBdev4_malloc 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 [2024-11-19 10:08:54.187710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:40.187 [2024-11-19 10:08:54.187855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.187 [2024-11-19 10:08:54.187896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:40.187 [2024-11-19 10:08:54.187916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.187 [2024-11-19 10:08:54.191248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.187 [2024-11-19 10:08:54.191344] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:40.187 BaseBdev4 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 spare_malloc 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 spare_delay 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 [2024-11-19 10:08:54.260878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.187 [2024-11-19 10:08:54.261008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.187 [2024-11-19 10:08:54.261053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:40.187 [2024-11-19 10:08:54.261074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:40.187 [2024-11-19 10:08:54.264438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.187 [2024-11-19 10:08:54.264524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.187 spare 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 [2024-11-19 10:08:54.273057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.187 [2024-11-19 10:08:54.276016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.187 [2024-11-19 10:08:54.276176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.187 [2024-11-19 10:08:54.276268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.187 [2024-11-19 10:08:54.276589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:40.187 [2024-11-19 10:08:54.276626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:40.187 [2024-11-19 10:08:54.277066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:40.187 [2024-11-19 10:08:54.277356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:40.187 [2024-11-19 10:08:54.277384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:40.187 [2024-11-19 10:08:54.277741] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.187 "name": "raid_bdev1", 00:14:40.187 "uuid": 
"b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:40.187 "strip_size_kb": 0, 00:14:40.187 "state": "online", 00:14:40.187 "raid_level": "raid1", 00:14:40.187 "superblock": true, 00:14:40.187 "num_base_bdevs": 4, 00:14:40.187 "num_base_bdevs_discovered": 4, 00:14:40.187 "num_base_bdevs_operational": 4, 00:14:40.187 "base_bdevs_list": [ 00:14:40.187 { 00:14:40.187 "name": "BaseBdev1", 00:14:40.187 "uuid": "08f3768e-b6f6-5447-a491-5d24d9bf8983", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 2048, 00:14:40.187 "data_size": 63488 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev2", 00:14:40.187 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 2048, 00:14:40.187 "data_size": 63488 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev3", 00:14:40.187 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 2048, 00:14:40.187 "data_size": 63488 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev4", 00:14:40.187 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 2048, 00:14:40.187 "data_size": 63488 00:14:40.187 } 00:14:40.187 ] 00:14:40.187 }' 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.187 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.754 [2024-11-19 10:08:54.862355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:40.754 10:08:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.754 10:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:41.326 [2024-11-19 10:08:55.250039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:41.326 /dev/nbd0 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.326 1+0 records in 00:14:41.326 1+0 records out 00:14:41.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549729 s, 7.5 MB/s 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:41.326 10:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:49.440 63488+0 records in 00:14:49.440 63488+0 records out 00:14:49.440 32505856 bytes (33 MB, 31 MiB) copied, 8.20229 s, 4.0 MB/s 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.440 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:49.699 [2024-11-19 10:09:03.868193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.699 [2024-11-19 10:09:03.916345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.699 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.958 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.958 "name": "raid_bdev1", 00:14:49.958 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:49.958 "strip_size_kb": 0, 00:14:49.958 "state": "online", 00:14:49.958 "raid_level": "raid1", 00:14:49.958 "superblock": true, 00:14:49.958 "num_base_bdevs": 4, 00:14:49.958 "num_base_bdevs_discovered": 3, 00:14:49.958 "num_base_bdevs_operational": 3, 00:14:49.958 "base_bdevs_list": [ 00:14:49.958 { 00:14:49.958 "name": null, 00:14:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.958 "is_configured": false, 00:14:49.958 "data_offset": 0, 00:14:49.958 "data_size": 63488 00:14:49.958 }, 00:14:49.958 { 00:14:49.958 "name": "BaseBdev2", 00:14:49.958 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:49.958 "is_configured": true, 00:14:49.958 
"data_offset": 2048, 00:14:49.958 "data_size": 63488 00:14:49.958 }, 00:14:49.958 { 00:14:49.958 "name": "BaseBdev3", 00:14:49.958 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:49.958 "is_configured": true, 00:14:49.958 "data_offset": 2048, 00:14:49.958 "data_size": 63488 00:14:49.958 }, 00:14:49.958 { 00:14:49.958 "name": "BaseBdev4", 00:14:49.958 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:49.958 "is_configured": true, 00:14:49.958 "data_offset": 2048, 00:14:49.958 "data_size": 63488 00:14:49.958 } 00:14:49.958 ] 00:14:49.958 }' 00:14:49.958 10:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.958 10:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.217 10:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.217 10:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.217 10:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.477 [2024-11-19 10:09:04.456504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.477 [2024-11-19 10:09:04.471616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:50.477 10:09:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.477 10:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:50.477 [2024-11-19 10:09:04.474448] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.416 "name": "raid_bdev1", 00:14:51.416 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:51.416 "strip_size_kb": 0, 00:14:51.416 "state": "online", 00:14:51.416 "raid_level": "raid1", 00:14:51.416 "superblock": true, 00:14:51.416 "num_base_bdevs": 4, 00:14:51.416 "num_base_bdevs_discovered": 4, 00:14:51.416 "num_base_bdevs_operational": 4, 00:14:51.416 "process": { 00:14:51.416 "type": "rebuild", 00:14:51.416 "target": "spare", 00:14:51.416 "progress": { 00:14:51.416 "blocks": 20480, 00:14:51.416 "percent": 32 00:14:51.416 } 00:14:51.416 }, 00:14:51.416 "base_bdevs_list": [ 00:14:51.416 { 00:14:51.416 "name": "spare", 00:14:51.416 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:51.416 "is_configured": true, 00:14:51.416 "data_offset": 2048, 00:14:51.416 "data_size": 63488 00:14:51.416 }, 00:14:51.416 { 00:14:51.416 "name": "BaseBdev2", 00:14:51.416 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:51.416 "is_configured": true, 00:14:51.416 "data_offset": 2048, 00:14:51.416 "data_size": 63488 00:14:51.416 }, 00:14:51.416 { 00:14:51.416 "name": "BaseBdev3", 00:14:51.416 "uuid": 
"6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:51.416 "is_configured": true, 00:14:51.416 "data_offset": 2048, 00:14:51.416 "data_size": 63488 00:14:51.416 }, 00:14:51.416 { 00:14:51.416 "name": "BaseBdev4", 00:14:51.416 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:51.416 "is_configured": true, 00:14:51.416 "data_offset": 2048, 00:14:51.416 "data_size": 63488 00:14:51.416 } 00:14:51.416 ] 00:14:51.416 }' 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.416 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.676 [2024-11-19 10:09:05.657439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.676 [2024-11-19 10:09:05.686278] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.676 [2024-11-19 10:09:05.686413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.676 [2024-11-19 10:09:05.686444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.676 [2024-11-19 10:09:05.686461] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.676 "name": "raid_bdev1", 00:14:51.676 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:51.676 "strip_size_kb": 0, 00:14:51.676 "state": "online", 00:14:51.676 "raid_level": "raid1", 00:14:51.676 "superblock": true, 00:14:51.676 "num_base_bdevs": 4, 00:14:51.676 
"num_base_bdevs_discovered": 3, 00:14:51.676 "num_base_bdevs_operational": 3, 00:14:51.676 "base_bdevs_list": [ 00:14:51.676 { 00:14:51.676 "name": null, 00:14:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.676 "is_configured": false, 00:14:51.676 "data_offset": 0, 00:14:51.676 "data_size": 63488 00:14:51.676 }, 00:14:51.676 { 00:14:51.676 "name": "BaseBdev2", 00:14:51.676 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:51.676 "is_configured": true, 00:14:51.676 "data_offset": 2048, 00:14:51.676 "data_size": 63488 00:14:51.676 }, 00:14:51.676 { 00:14:51.676 "name": "BaseBdev3", 00:14:51.676 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:51.676 "is_configured": true, 00:14:51.676 "data_offset": 2048, 00:14:51.676 "data_size": 63488 00:14:51.676 }, 00:14:51.676 { 00:14:51.676 "name": "BaseBdev4", 00:14:51.676 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:51.676 "is_configured": true, 00:14:51.676 "data_offset": 2048, 00:14:51.676 "data_size": 63488 00:14:51.676 } 00:14:51.676 ] 00:14:51.676 }' 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.676 10:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.243 "name": "raid_bdev1", 00:14:52.243 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:52.243 "strip_size_kb": 0, 00:14:52.243 "state": "online", 00:14:52.243 "raid_level": "raid1", 00:14:52.243 "superblock": true, 00:14:52.243 "num_base_bdevs": 4, 00:14:52.243 "num_base_bdevs_discovered": 3, 00:14:52.243 "num_base_bdevs_operational": 3, 00:14:52.243 "base_bdevs_list": [ 00:14:52.243 { 00:14:52.243 "name": null, 00:14:52.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.243 "is_configured": false, 00:14:52.243 "data_offset": 0, 00:14:52.243 "data_size": 63488 00:14:52.243 }, 00:14:52.243 { 00:14:52.243 "name": "BaseBdev2", 00:14:52.243 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:52.243 "is_configured": true, 00:14:52.243 "data_offset": 2048, 00:14:52.243 "data_size": 63488 00:14:52.243 }, 00:14:52.243 { 00:14:52.243 "name": "BaseBdev3", 00:14:52.243 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:52.243 "is_configured": true, 00:14:52.243 "data_offset": 2048, 00:14:52.243 "data_size": 63488 00:14:52.243 }, 00:14:52.243 { 00:14:52.243 "name": "BaseBdev4", 00:14:52.243 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:52.243 "is_configured": true, 00:14:52.243 "data_offset": 2048, 00:14:52.243 "data_size": 63488 00:14:52.243 } 00:14:52.243 ] 00:14:52.243 }' 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.243 [2024-11-19 10:09:06.384361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.243 [2024-11-19 10:09:06.398345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.243 10:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:52.243 [2024-11-19 10:09:06.401277] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.178 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.178 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.178 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.178 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.178 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.436 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.436 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.436 10:09:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.436 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.436 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.436 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.436 "name": "raid_bdev1", 00:14:53.436 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:53.436 "strip_size_kb": 0, 00:14:53.436 "state": "online", 00:14:53.436 "raid_level": "raid1", 00:14:53.436 "superblock": true, 00:14:53.436 "num_base_bdevs": 4, 00:14:53.436 "num_base_bdevs_discovered": 4, 00:14:53.436 "num_base_bdevs_operational": 4, 00:14:53.436 "process": { 00:14:53.436 "type": "rebuild", 00:14:53.436 "target": "spare", 00:14:53.436 "progress": { 00:14:53.436 "blocks": 20480, 00:14:53.436 "percent": 32 00:14:53.436 } 00:14:53.436 }, 00:14:53.436 "base_bdevs_list": [ 00:14:53.436 { 00:14:53.436 "name": "spare", 00:14:53.436 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:53.436 "is_configured": true, 00:14:53.436 "data_offset": 2048, 00:14:53.436 "data_size": 63488 00:14:53.437 }, 00:14:53.437 { 00:14:53.437 "name": "BaseBdev2", 00:14:53.437 "uuid": "ce1ee9cb-face-5536-9bf8-b19218ee6e4f", 00:14:53.437 "is_configured": true, 00:14:53.437 "data_offset": 2048, 00:14:53.437 "data_size": 63488 00:14:53.437 }, 00:14:53.437 { 00:14:53.437 "name": "BaseBdev3", 00:14:53.437 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:53.437 "is_configured": true, 00:14:53.437 "data_offset": 2048, 00:14:53.437 "data_size": 63488 00:14:53.437 }, 00:14:53.437 { 00:14:53.437 "name": "BaseBdev4", 00:14:53.437 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:53.437 "is_configured": true, 00:14:53.437 "data_offset": 2048, 00:14:53.437 "data_size": 63488 00:14:53.437 } 00:14:53.437 ] 00:14:53.437 }' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:53.437 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.437 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.437 [2024-11-19 10:09:07.567029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.696 [2024-11-19 10:09:07.712940] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.696 "name": "raid_bdev1", 00:14:53.696 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:53.696 "strip_size_kb": 0, 00:14:53.696 "state": "online", 00:14:53.696 "raid_level": "raid1", 00:14:53.696 "superblock": true, 00:14:53.696 "num_base_bdevs": 4, 00:14:53.696 "num_base_bdevs_discovered": 3, 00:14:53.696 "num_base_bdevs_operational": 3, 00:14:53.696 "process": { 00:14:53.696 "type": "rebuild", 00:14:53.696 "target": "spare", 00:14:53.696 "progress": { 00:14:53.696 "blocks": 24576, 00:14:53.696 "percent": 38 00:14:53.696 } 00:14:53.696 }, 00:14:53.696 "base_bdevs_list": [ 00:14:53.696 { 00:14:53.696 "name": "spare", 00:14:53.696 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:53.696 "is_configured": true, 00:14:53.696 "data_offset": 2048, 00:14:53.696 "data_size": 63488 00:14:53.696 }, 00:14:53.696 { 00:14:53.696 "name": null, 
00:14:53.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.696 "is_configured": false, 00:14:53.696 "data_offset": 0, 00:14:53.696 "data_size": 63488 00:14:53.696 }, 00:14:53.696 { 00:14:53.696 "name": "BaseBdev3", 00:14:53.696 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:53.696 "is_configured": true, 00:14:53.696 "data_offset": 2048, 00:14:53.696 "data_size": 63488 00:14:53.696 }, 00:14:53.696 { 00:14:53.696 "name": "BaseBdev4", 00:14:53.696 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:53.696 "is_configured": true, 00:14:53.696 "data_offset": 2048, 00:14:53.696 "data_size": 63488 00:14:53.696 } 00:14:53.696 ] 00:14:53.696 }' 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=516 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.696 10:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.956 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.956 "name": "raid_bdev1", 00:14:53.956 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:53.956 "strip_size_kb": 0, 00:14:53.956 "state": "online", 00:14:53.956 "raid_level": "raid1", 00:14:53.956 "superblock": true, 00:14:53.956 "num_base_bdevs": 4, 00:14:53.956 "num_base_bdevs_discovered": 3, 00:14:53.956 "num_base_bdevs_operational": 3, 00:14:53.956 "process": { 00:14:53.956 "type": "rebuild", 00:14:53.956 "target": "spare", 00:14:53.956 "progress": { 00:14:53.956 "blocks": 26624, 00:14:53.956 "percent": 41 00:14:53.956 } 00:14:53.956 }, 00:14:53.956 "base_bdevs_list": [ 00:14:53.956 { 00:14:53.956 "name": "spare", 00:14:53.956 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:53.956 "is_configured": true, 00:14:53.956 "data_offset": 2048, 00:14:53.956 "data_size": 63488 00:14:53.956 }, 00:14:53.956 { 00:14:53.956 "name": null, 00:14:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.956 "is_configured": false, 00:14:53.956 "data_offset": 0, 00:14:53.956 "data_size": 63488 00:14:53.956 }, 00:14:53.956 { 00:14:53.956 "name": "BaseBdev3", 00:14:53.956 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:53.956 "is_configured": true, 00:14:53.956 "data_offset": 2048, 00:14:53.956 "data_size": 63488 00:14:53.956 }, 00:14:53.956 { 00:14:53.956 "name": "BaseBdev4", 00:14:53.956 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:53.956 "is_configured": true, 00:14:53.956 "data_offset": 2048, 00:14:53.956 
"data_size": 63488 00:14:53.956 } 00:14:53.956 ] 00:14:53.956 }' 00:14:53.956 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.956 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.956 10:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.956 10:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.956 10:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.893 "name": "raid_bdev1", 00:14:54.893 "uuid": 
"b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:54.893 "strip_size_kb": 0, 00:14:54.893 "state": "online", 00:14:54.893 "raid_level": "raid1", 00:14:54.893 "superblock": true, 00:14:54.893 "num_base_bdevs": 4, 00:14:54.893 "num_base_bdevs_discovered": 3, 00:14:54.893 "num_base_bdevs_operational": 3, 00:14:54.893 "process": { 00:14:54.893 "type": "rebuild", 00:14:54.893 "target": "spare", 00:14:54.893 "progress": { 00:14:54.893 "blocks": 51200, 00:14:54.893 "percent": 80 00:14:54.893 } 00:14:54.893 }, 00:14:54.893 "base_bdevs_list": [ 00:14:54.893 { 00:14:54.893 "name": "spare", 00:14:54.893 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:54.893 "is_configured": true, 00:14:54.893 "data_offset": 2048, 00:14:54.893 "data_size": 63488 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": null, 00:14:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.893 "is_configured": false, 00:14:54.893 "data_offset": 0, 00:14:54.893 "data_size": 63488 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": "BaseBdev3", 00:14:54.893 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:54.893 "is_configured": true, 00:14:54.893 "data_offset": 2048, 00:14:54.893 "data_size": 63488 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": "BaseBdev4", 00:14:54.893 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:54.893 "is_configured": true, 00:14:54.893 "data_offset": 2048, 00:14:54.893 "data_size": 63488 00:14:54.893 } 00:14:54.893 ] 00:14:54.893 }' 00:14:54.893 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.153 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.153 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.153 10:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.153 10:09:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.412 [2024-11-19 10:09:09.631380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:55.412 [2024-11-19 10:09:09.631526] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:55.412 [2024-11-19 10:09:09.631739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.017 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.277 "name": "raid_bdev1", 00:14:56.277 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:56.277 "strip_size_kb": 0, 00:14:56.277 "state": "online", 00:14:56.277 "raid_level": "raid1", 00:14:56.277 "superblock": true, 00:14:56.277 "num_base_bdevs": 
4, 00:14:56.277 "num_base_bdevs_discovered": 3, 00:14:56.277 "num_base_bdevs_operational": 3, 00:14:56.277 "base_bdevs_list": [ 00:14:56.277 { 00:14:56.277 "name": "spare", 00:14:56.277 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": null, 00:14:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.277 "is_configured": false, 00:14:56.277 "data_offset": 0, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": "BaseBdev3", 00:14:56.277 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": "BaseBdev4", 00:14:56.277 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 } 00:14:56.277 ] 00:14:56.277 }' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.277 10:09:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.277 "name": "raid_bdev1", 00:14:56.277 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:56.277 "strip_size_kb": 0, 00:14:56.277 "state": "online", 00:14:56.277 "raid_level": "raid1", 00:14:56.277 "superblock": true, 00:14:56.277 "num_base_bdevs": 4, 00:14:56.277 "num_base_bdevs_discovered": 3, 00:14:56.277 "num_base_bdevs_operational": 3, 00:14:56.277 "base_bdevs_list": [ 00:14:56.277 { 00:14:56.277 "name": "spare", 00:14:56.277 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": null, 00:14:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.277 "is_configured": false, 00:14:56.277 "data_offset": 0, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": "BaseBdev3", 00:14:56.277 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 }, 00:14:56.277 { 00:14:56.277 "name": "BaseBdev4", 00:14:56.277 "uuid": 
"c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:56.277 "is_configured": true, 00:14:56.277 "data_offset": 2048, 00:14:56.277 "data_size": 63488 00:14:56.277 } 00:14:56.277 ] 00:14:56.277 }' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.277 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.536 10:09:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.536 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.536 "name": "raid_bdev1", 00:14:56.536 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:56.536 "strip_size_kb": 0, 00:14:56.536 "state": "online", 00:14:56.536 "raid_level": "raid1", 00:14:56.536 "superblock": true, 00:14:56.536 "num_base_bdevs": 4, 00:14:56.536 "num_base_bdevs_discovered": 3, 00:14:56.536 "num_base_bdevs_operational": 3, 00:14:56.536 "base_bdevs_list": [ 00:14:56.536 { 00:14:56.536 "name": "spare", 00:14:56.536 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:56.536 "is_configured": true, 00:14:56.537 "data_offset": 2048, 00:14:56.537 "data_size": 63488 00:14:56.537 }, 00:14:56.537 { 00:14:56.537 "name": null, 00:14:56.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.537 "is_configured": false, 00:14:56.537 "data_offset": 0, 00:14:56.537 "data_size": 63488 00:14:56.537 }, 00:14:56.537 { 00:14:56.537 "name": "BaseBdev3", 00:14:56.537 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:56.537 "is_configured": true, 00:14:56.537 "data_offset": 2048, 00:14:56.537 "data_size": 63488 00:14:56.537 }, 00:14:56.537 { 00:14:56.537 "name": "BaseBdev4", 00:14:56.537 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:56.537 "is_configured": true, 00:14:56.537 "data_offset": 2048, 00:14:56.537 "data_size": 63488 00:14:56.537 } 00:14:56.537 ] 00:14:56.537 }' 00:14:56.537 10:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.537 10:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.105 [2024-11-19 10:09:11.102547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.105 [2024-11-19 10:09:11.102626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.105 [2024-11-19 10:09:11.102780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.105 [2024-11-19 10:09:11.102915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.105 [2024-11-19 10:09:11.102944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.105 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:57.365 /dev/nbd0 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.365 
10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.365 1+0 records in 00:14:57.365 1+0 records out 00:14:57.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223761 s, 18.3 MB/s 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.365 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:57.934 /dev/nbd1 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.934 1+0 records in 00:14:57.934 1+0 records out 00:14:57.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405677 s, 10.1 MB/s 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:57.934 10:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.934 10:09:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.934 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.502 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.762 10:09:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.762 [2024-11-19 10:09:12.812966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.762 [2024-11-19 10:09:12.813054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.762 [2024-11-19 10:09:12.813095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:58.762 [2024-11-19 10:09:12.813111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.762 [2024-11-19 10:09:12.816345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.762 [2024-11-19 10:09:12.816423] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.762 [2024-11-19 10:09:12.816616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:58.762 [2024-11-19 10:09:12.816690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.762 [2024-11-19 10:09:12.816901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.762 [2024-11-19 10:09:12.817102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:58.762 spare 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.762 [2024-11-19 10:09:12.917274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:58.762 [2024-11-19 10:09:12.917366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:58.762 [2024-11-19 10:09:12.917936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:58.762 [2024-11-19 10:09:12.918276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:58.762 [2024-11-19 10:09:12.918299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:58.762 [2024-11-19 10:09:12.918553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.762 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.762 "name": "raid_bdev1", 00:14:58.762 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:58.762 "strip_size_kb": 0, 00:14:58.762 "state": "online", 00:14:58.762 "raid_level": "raid1", 00:14:58.762 "superblock": true, 00:14:58.762 "num_base_bdevs": 4, 00:14:58.762 "num_base_bdevs_discovered": 3, 00:14:58.762 "num_base_bdevs_operational": 
3, 00:14:58.762 "base_bdevs_list": [ 00:14:58.762 { 00:14:58.762 "name": "spare", 00:14:58.762 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:58.763 "is_configured": true, 00:14:58.763 "data_offset": 2048, 00:14:58.763 "data_size": 63488 00:14:58.763 }, 00:14:58.763 { 00:14:58.763 "name": null, 00:14:58.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.763 "is_configured": false, 00:14:58.763 "data_offset": 2048, 00:14:58.763 "data_size": 63488 00:14:58.763 }, 00:14:58.763 { 00:14:58.763 "name": "BaseBdev3", 00:14:58.763 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:58.763 "is_configured": true, 00:14:58.763 "data_offset": 2048, 00:14:58.763 "data_size": 63488 00:14:58.763 }, 00:14:58.763 { 00:14:58.763 "name": "BaseBdev4", 00:14:58.763 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:58.763 "is_configured": true, 00:14:58.763 "data_offset": 2048, 00:14:58.763 "data_size": 63488 00:14:58.763 } 00:14:58.763 ] 00:14:58.763 }' 00:14:58.763 10:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.763 10:09:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.330 "name": "raid_bdev1", 00:14:59.330 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:59.330 "strip_size_kb": 0, 00:14:59.330 "state": "online", 00:14:59.330 "raid_level": "raid1", 00:14:59.330 "superblock": true, 00:14:59.330 "num_base_bdevs": 4, 00:14:59.330 "num_base_bdevs_discovered": 3, 00:14:59.330 "num_base_bdevs_operational": 3, 00:14:59.330 "base_bdevs_list": [ 00:14:59.330 { 00:14:59.330 "name": "spare", 00:14:59.330 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:14:59.330 "is_configured": true, 00:14:59.330 "data_offset": 2048, 00:14:59.330 "data_size": 63488 00:14:59.330 }, 00:14:59.330 { 00:14:59.330 "name": null, 00:14:59.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.330 "is_configured": false, 00:14:59.330 "data_offset": 2048, 00:14:59.330 "data_size": 63488 00:14:59.330 }, 00:14:59.330 { 00:14:59.330 "name": "BaseBdev3", 00:14:59.330 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:59.330 "is_configured": true, 00:14:59.330 "data_offset": 2048, 00:14:59.330 "data_size": 63488 00:14:59.330 }, 00:14:59.330 { 00:14:59.330 "name": "BaseBdev4", 00:14:59.330 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:59.330 "is_configured": true, 00:14:59.330 "data_offset": 2048, 00:14:59.330 "data_size": 63488 00:14:59.330 } 00:14:59.330 ] 00:14:59.330 }' 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.330 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.590 [2024-11-19 10:09:13.681393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.590 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.590 "name": "raid_bdev1", 00:14:59.590 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:14:59.590 "strip_size_kb": 0, 00:14:59.590 "state": "online", 00:14:59.590 "raid_level": "raid1", 00:14:59.590 "superblock": true, 00:14:59.590 "num_base_bdevs": 4, 00:14:59.590 "num_base_bdevs_discovered": 2, 00:14:59.590 "num_base_bdevs_operational": 2, 00:14:59.590 "base_bdevs_list": [ 00:14:59.590 { 00:14:59.590 "name": null, 00:14:59.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.590 "is_configured": false, 00:14:59.590 "data_offset": 0, 00:14:59.590 "data_size": 63488 00:14:59.590 }, 00:14:59.590 { 00:14:59.591 "name": null, 00:14:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.591 "is_configured": false, 00:14:59.591 "data_offset": 2048, 00:14:59.591 "data_size": 63488 00:14:59.591 }, 00:14:59.591 { 00:14:59.591 "name": "BaseBdev3", 00:14:59.591 "uuid": 
"6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:14:59.591 "is_configured": true, 00:14:59.591 "data_offset": 2048, 00:14:59.591 "data_size": 63488 00:14:59.591 }, 00:14:59.591 { 00:14:59.591 "name": "BaseBdev4", 00:14:59.591 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:14:59.591 "is_configured": true, 00:14:59.591 "data_offset": 2048, 00:14:59.591 "data_size": 63488 00:14:59.591 } 00:14:59.591 ] 00:14:59.591 }' 00:14:59.591 10:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.591 10:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.158 10:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.158 10:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.158 10:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.158 [2024-11-19 10:09:14.205592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.158 [2024-11-19 10:09:14.205919] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:00.158 [2024-11-19 10:09:14.205944] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:00.158 [2024-11-19 10:09:14.206000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.158 [2024-11-19 10:09:14.220047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:00.158 10:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.158 10:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:00.158 [2024-11-19 10:09:14.222982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.126 "name": "raid_bdev1", 00:15:01.126 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:01.126 "strip_size_kb": 0, 00:15:01.126 "state": "online", 00:15:01.126 "raid_level": "raid1", 
00:15:01.126 "superblock": true, 00:15:01.126 "num_base_bdevs": 4, 00:15:01.126 "num_base_bdevs_discovered": 3, 00:15:01.126 "num_base_bdevs_operational": 3, 00:15:01.126 "process": { 00:15:01.126 "type": "rebuild", 00:15:01.126 "target": "spare", 00:15:01.126 "progress": { 00:15:01.126 "blocks": 20480, 00:15:01.126 "percent": 32 00:15:01.126 } 00:15:01.126 }, 00:15:01.126 "base_bdevs_list": [ 00:15:01.126 { 00:15:01.126 "name": "spare", 00:15:01.126 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:15:01.126 "is_configured": true, 00:15:01.126 "data_offset": 2048, 00:15:01.126 "data_size": 63488 00:15:01.126 }, 00:15:01.126 { 00:15:01.126 "name": null, 00:15:01.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.126 "is_configured": false, 00:15:01.126 "data_offset": 2048, 00:15:01.126 "data_size": 63488 00:15:01.126 }, 00:15:01.126 { 00:15:01.126 "name": "BaseBdev3", 00:15:01.126 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:01.126 "is_configured": true, 00:15:01.126 "data_offset": 2048, 00:15:01.126 "data_size": 63488 00:15:01.126 }, 00:15:01.126 { 00:15:01.126 "name": "BaseBdev4", 00:15:01.126 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:01.126 "is_configured": true, 00:15:01.126 "data_offset": 2048, 00:15:01.126 "data_size": 63488 00:15:01.126 } 00:15:01.126 ] 00:15:01.126 }' 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.126 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.385 [2024-11-19 10:09:15.388755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.385 [2024-11-19 10:09:15.434367] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.385 [2024-11-19 10:09:15.434490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.385 [2024-11-19 10:09:15.434522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.385 [2024-11-19 10:09:15.434534] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.385 "name": "raid_bdev1", 00:15:01.385 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:01.385 "strip_size_kb": 0, 00:15:01.385 "state": "online", 00:15:01.385 "raid_level": "raid1", 00:15:01.385 "superblock": true, 00:15:01.385 "num_base_bdevs": 4, 00:15:01.385 "num_base_bdevs_discovered": 2, 00:15:01.385 "num_base_bdevs_operational": 2, 00:15:01.385 "base_bdevs_list": [ 00:15:01.385 { 00:15:01.385 "name": null, 00:15:01.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.385 "is_configured": false, 00:15:01.385 "data_offset": 0, 00:15:01.385 "data_size": 63488 00:15:01.385 }, 00:15:01.385 { 00:15:01.385 "name": null, 00:15:01.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.385 "is_configured": false, 00:15:01.385 "data_offset": 2048, 00:15:01.385 "data_size": 63488 00:15:01.385 }, 00:15:01.385 { 00:15:01.385 "name": "BaseBdev3", 00:15:01.385 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:01.385 "is_configured": true, 00:15:01.385 "data_offset": 2048, 00:15:01.385 "data_size": 63488 00:15:01.385 }, 00:15:01.385 { 00:15:01.385 "name": "BaseBdev4", 00:15:01.385 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:01.385 "is_configured": true, 00:15:01.385 "data_offset": 2048, 00:15:01.385 "data_size": 63488 00:15:01.385 } 00:15:01.385 ] 00:15:01.385 }' 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:01.385 10:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.953 10:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.953 10:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.953 10:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.953 [2024-11-19 10:09:16.007856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.953 [2024-11-19 10:09:16.007957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.953 [2024-11-19 10:09:16.008008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:01.953 [2024-11-19 10:09:16.008035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.953 [2024-11-19 10:09:16.008733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.953 [2024-11-19 10:09:16.008777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.953 [2024-11-19 10:09:16.008937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:01.953 [2024-11-19 10:09:16.008959] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:01.953 [2024-11-19 10:09:16.008985] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:01.953 [2024-11-19 10:09:16.009034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.953 [2024-11-19 10:09:16.023083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:01.953 spare 00:15:01.953 10:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.953 10:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:01.953 [2024-11-19 10:09:16.025931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.891 "name": "raid_bdev1", 00:15:02.891 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:02.891 "strip_size_kb": 0, 00:15:02.891 "state": "online", 00:15:02.891 
"raid_level": "raid1", 00:15:02.891 "superblock": true, 00:15:02.891 "num_base_bdevs": 4, 00:15:02.891 "num_base_bdevs_discovered": 3, 00:15:02.891 "num_base_bdevs_operational": 3, 00:15:02.891 "process": { 00:15:02.891 "type": "rebuild", 00:15:02.891 "target": "spare", 00:15:02.891 "progress": { 00:15:02.891 "blocks": 20480, 00:15:02.891 "percent": 32 00:15:02.891 } 00:15:02.891 }, 00:15:02.891 "base_bdevs_list": [ 00:15:02.891 { 00:15:02.891 "name": "spare", 00:15:02.891 "uuid": "28110dab-6d07-5d55-9dc7-b455741be4a2", 00:15:02.891 "is_configured": true, 00:15:02.891 "data_offset": 2048, 00:15:02.891 "data_size": 63488 00:15:02.891 }, 00:15:02.891 { 00:15:02.891 "name": null, 00:15:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.891 "is_configured": false, 00:15:02.891 "data_offset": 2048, 00:15:02.891 "data_size": 63488 00:15:02.891 }, 00:15:02.891 { 00:15:02.891 "name": "BaseBdev3", 00:15:02.891 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:02.891 "is_configured": true, 00:15:02.891 "data_offset": 2048, 00:15:02.891 "data_size": 63488 00:15:02.891 }, 00:15:02.891 { 00:15:02.891 "name": "BaseBdev4", 00:15:02.891 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:02.891 "is_configured": true, 00:15:02.891 "data_offset": 2048, 00:15:02.891 "data_size": 63488 00:15:02.891 } 00:15:02.891 ] 00:15:02.891 }' 00:15:02.891 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.151 [2024-11-19 10:09:17.191886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.151 [2024-11-19 10:09:17.237542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.151 [2024-11-19 10:09:17.237667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.151 [2024-11-19 10:09:17.237694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.151 [2024-11-19 10:09:17.237709] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.151 
10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.151 "name": "raid_bdev1", 00:15:03.151 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:03.151 "strip_size_kb": 0, 00:15:03.151 "state": "online", 00:15:03.151 "raid_level": "raid1", 00:15:03.151 "superblock": true, 00:15:03.151 "num_base_bdevs": 4, 00:15:03.151 "num_base_bdevs_discovered": 2, 00:15:03.151 "num_base_bdevs_operational": 2, 00:15:03.151 "base_bdevs_list": [ 00:15:03.151 { 00:15:03.151 "name": null, 00:15:03.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.151 "is_configured": false, 00:15:03.151 "data_offset": 0, 00:15:03.151 "data_size": 63488 00:15:03.151 }, 00:15:03.151 { 00:15:03.151 "name": null, 00:15:03.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.151 "is_configured": false, 00:15:03.151 "data_offset": 2048, 00:15:03.151 "data_size": 63488 00:15:03.151 }, 00:15:03.151 { 00:15:03.151 "name": "BaseBdev3", 00:15:03.151 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:03.151 "is_configured": true, 00:15:03.151 "data_offset": 2048, 00:15:03.151 "data_size": 63488 00:15:03.151 }, 00:15:03.151 { 00:15:03.151 "name": "BaseBdev4", 00:15:03.151 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:03.151 "is_configured": true, 00:15:03.151 "data_offset": 2048, 00:15:03.151 "data_size": 63488 00:15:03.151 } 00:15:03.151 ] 00:15:03.151 }' 00:15:03.151 10:09:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.151 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.719 "name": "raid_bdev1", 00:15:03.719 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:03.719 "strip_size_kb": 0, 00:15:03.719 "state": "online", 00:15:03.719 "raid_level": "raid1", 00:15:03.719 "superblock": true, 00:15:03.719 "num_base_bdevs": 4, 00:15:03.719 "num_base_bdevs_discovered": 2, 00:15:03.719 "num_base_bdevs_operational": 2, 00:15:03.719 "base_bdevs_list": [ 00:15:03.719 { 00:15:03.719 "name": null, 00:15:03.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.719 "is_configured": false, 00:15:03.719 "data_offset": 0, 00:15:03.719 "data_size": 63488 00:15:03.719 }, 00:15:03.719 
{ 00:15:03.719 "name": null, 00:15:03.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.719 "is_configured": false, 00:15:03.719 "data_offset": 2048, 00:15:03.719 "data_size": 63488 00:15:03.719 }, 00:15:03.719 { 00:15:03.719 "name": "BaseBdev3", 00:15:03.719 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:03.719 "is_configured": true, 00:15:03.719 "data_offset": 2048, 00:15:03.719 "data_size": 63488 00:15:03.719 }, 00:15:03.719 { 00:15:03.719 "name": "BaseBdev4", 00:15:03.719 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:03.719 "is_configured": true, 00:15:03.719 "data_offset": 2048, 00:15:03.719 "data_size": 63488 00:15:03.719 } 00:15:03.719 ] 00:15:03.719 }' 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.719 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.979 [2024-11-19 10:09:17.987022] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.979 [2024-11-19 10:09:17.987106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.979 [2024-11-19 10:09:17.987143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:03.979 [2024-11-19 10:09:17.987162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.979 [2024-11-19 10:09:17.987831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.979 [2024-11-19 10:09:17.987891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.979 [2024-11-19 10:09:17.988011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:03.979 [2024-11-19 10:09:17.988053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:03.979 [2024-11-19 10:09:17.988068] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:03.979 [2024-11-19 10:09:17.988101] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:03.979 BaseBdev1 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.979 10:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.916 10:09:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.916 10:09:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.916 "name": "raid_bdev1", 00:15:04.916 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:04.916 "strip_size_kb": 0, 00:15:04.916 "state": "online", 00:15:04.916 "raid_level": "raid1", 00:15:04.916 "superblock": true, 00:15:04.916 "num_base_bdevs": 4, 00:15:04.916 "num_base_bdevs_discovered": 2, 00:15:04.916 "num_base_bdevs_operational": 2, 00:15:04.916 "base_bdevs_list": [ 00:15:04.916 { 00:15:04.916 "name": null, 00:15:04.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.916 "is_configured": false, 00:15:04.916 "data_offset": 0, 00:15:04.916 "data_size": 63488 00:15:04.916 }, 00:15:04.916 { 00:15:04.916 "name": null, 00:15:04.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.916 
"is_configured": false, 00:15:04.916 "data_offset": 2048, 00:15:04.916 "data_size": 63488 00:15:04.916 }, 00:15:04.916 { 00:15:04.916 "name": "BaseBdev3", 00:15:04.916 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:04.916 "is_configured": true, 00:15:04.916 "data_offset": 2048, 00:15:04.916 "data_size": 63488 00:15:04.916 }, 00:15:04.916 { 00:15:04.916 "name": "BaseBdev4", 00:15:04.916 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:04.916 "is_configured": true, 00:15:04.916 "data_offset": 2048, 00:15:04.916 "data_size": 63488 00:15:04.916 } 00:15:04.916 ] 00:15:04.916 }' 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.916 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:05.550 "name": "raid_bdev1", 00:15:05.550 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:05.550 "strip_size_kb": 0, 00:15:05.550 "state": "online", 00:15:05.550 "raid_level": "raid1", 00:15:05.550 "superblock": true, 00:15:05.550 "num_base_bdevs": 4, 00:15:05.550 "num_base_bdevs_discovered": 2, 00:15:05.550 "num_base_bdevs_operational": 2, 00:15:05.550 "base_bdevs_list": [ 00:15:05.550 { 00:15:05.550 "name": null, 00:15:05.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.550 "is_configured": false, 00:15:05.550 "data_offset": 0, 00:15:05.550 "data_size": 63488 00:15:05.550 }, 00:15:05.550 { 00:15:05.550 "name": null, 00:15:05.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.550 "is_configured": false, 00:15:05.550 "data_offset": 2048, 00:15:05.550 "data_size": 63488 00:15:05.550 }, 00:15:05.550 { 00:15:05.550 "name": "BaseBdev3", 00:15:05.550 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:05.550 "is_configured": true, 00:15:05.550 "data_offset": 2048, 00:15:05.550 "data_size": 63488 00:15:05.550 }, 00:15:05.550 { 00:15:05.550 "name": "BaseBdev4", 00:15:05.550 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:05.550 "is_configured": true, 00:15:05.550 "data_offset": 2048, 00:15:05.550 "data_size": 63488 00:15:05.550 } 00:15:05.550 ] 00:15:05.550 }' 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.550 [2024-11-19 10:09:19.691605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.550 [2024-11-19 10:09:19.691946] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:05.550 [2024-11-19 10:09:19.691969] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:05.550 request: 00:15:05.550 { 00:15:05.550 "base_bdev": "BaseBdev1", 00:15:05.550 "raid_bdev": "raid_bdev1", 00:15:05.550 "method": "bdev_raid_add_base_bdev", 00:15:05.550 "req_id": 1 00:15:05.550 } 00:15:05.550 Got JSON-RPC error response 00:15:05.550 response: 00:15:05.550 { 00:15:05.550 "code": -22, 00:15:05.550 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:05.550 } 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.550 10:09:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.485 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.744 "name": "raid_bdev1", 00:15:06.744 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:06.744 "strip_size_kb": 0, 00:15:06.744 "state": "online", 00:15:06.744 "raid_level": "raid1", 00:15:06.744 "superblock": true, 00:15:06.744 "num_base_bdevs": 4, 00:15:06.744 "num_base_bdevs_discovered": 2, 00:15:06.744 "num_base_bdevs_operational": 2, 00:15:06.744 "base_bdevs_list": [ 00:15:06.744 { 00:15:06.744 "name": null, 00:15:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.744 "is_configured": false, 00:15:06.744 "data_offset": 0, 00:15:06.744 "data_size": 63488 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": null, 00:15:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.744 "is_configured": false, 00:15:06.744 "data_offset": 2048, 00:15:06.744 "data_size": 63488 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": "BaseBdev3", 00:15:06.744 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:06.744 "is_configured": true, 00:15:06.744 "data_offset": 2048, 00:15:06.744 "data_size": 63488 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": "BaseBdev4", 00:15:06.744 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:06.744 "is_configured": true, 00:15:06.744 "data_offset": 2048, 00:15:06.744 "data_size": 63488 00:15:06.744 } 00:15:06.744 ] 00:15:06.744 }' 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.744 10:09:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.002 10:09:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.002 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.261 "name": "raid_bdev1", 00:15:07.261 "uuid": "b05216bb-0591-4f7b-beb3-da613703a7f2", 00:15:07.261 "strip_size_kb": 0, 00:15:07.261 "state": "online", 00:15:07.261 "raid_level": "raid1", 00:15:07.261 "superblock": true, 00:15:07.261 "num_base_bdevs": 4, 00:15:07.261 "num_base_bdevs_discovered": 2, 00:15:07.261 "num_base_bdevs_operational": 2, 00:15:07.261 "base_bdevs_list": [ 00:15:07.261 { 00:15:07.261 "name": null, 00:15:07.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.261 "is_configured": false, 00:15:07.261 "data_offset": 0, 00:15:07.261 "data_size": 63488 00:15:07.261 }, 00:15:07.261 { 00:15:07.261 "name": null, 00:15:07.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.261 "is_configured": false, 00:15:07.261 "data_offset": 2048, 00:15:07.261 "data_size": 63488 00:15:07.261 }, 00:15:07.261 { 00:15:07.261 "name": "BaseBdev3", 00:15:07.261 "uuid": "6f60471e-d0c1-5c13-b675-c0dd54fd14e2", 00:15:07.261 "is_configured": true, 00:15:07.261 "data_offset": 2048, 00:15:07.261 "data_size": 63488 00:15:07.261 }, 
00:15:07.261 { 00:15:07.261 "name": "BaseBdev4", 00:15:07.261 "uuid": "c4124fe7-4a7d-5a67-bd4c-f7f3624a78ed", 00:15:07.261 "is_configured": true, 00:15:07.261 "data_offset": 2048, 00:15:07.261 "data_size": 63488 00:15:07.261 } 00:15:07.261 ] 00:15:07.261 }' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78230 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78230 ']' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78230 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78230 00:15:07.261 killing process with pid 78230 00:15:07.261 Received shutdown signal, test time was about 60.000000 seconds 00:15:07.261 00:15:07.261 Latency(us) 00:15:07.261 [2024-11-19T10:09:21.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.261 [2024-11-19T10:09:21.493Z] =================================================================================================================== 00:15:07.261 [2024-11-19T10:09:21.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78230' 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78230 00:15:07.261 [2024-11-19 10:09:21.468285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.261 10:09:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78230 00:15:07.261 [2024-11-19 10:09:21.468482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.261 [2024-11-19 10:09:21.468598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.261 [2024-11-19 10:09:21.468615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:07.829 [2024-11-19 10:09:21.948641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.206 10:09:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.206 00:15:09.206 real 0m30.370s 00:15:09.206 user 0m37.029s 00:15:09.206 sys 0m4.369s 00:15:09.206 10:09:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.206 10:09:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.206 ************************************ 00:15:09.206 END TEST raid_rebuild_test_sb 00:15:09.206 ************************************ 00:15:09.206 10:09:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:09.206 10:09:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.206 10:09:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.206 10:09:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:09.206 ************************************ 00:15:09.206 START TEST raid_rebuild_test_io 00:15:09.206 ************************************ 00:15:09.206 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79028 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79028 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79028 ']' 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.207 10:09:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.207 [2024-11-19 10:09:23.279157] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:09.207 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.207 Zero copy mechanism will not be used. 00:15:09.207 [2024-11-19 10:09:23.279357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79028 ] 00:15:09.465 [2024-11-19 10:09:23.465331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.465 [2024-11-19 10:09:23.616435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.728 [2024-11-19 10:09:23.856636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.728 [2024-11-19 10:09:23.856702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 BaseBdev1_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 [2024-11-19 10:09:24.330811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.304 [2024-11-19 10:09:24.331487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.304 [2024-11-19 10:09:24.331555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.304 [2024-11-19 10:09:24.331580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.304 [2024-11-19 10:09:24.334895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.304 [2024-11-19 10:09:24.334947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.304 BaseBdev1 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:10.304 BaseBdev2_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 [2024-11-19 10:09:24.393729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.304 [2024-11-19 10:09:24.393837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.304 [2024-11-19 10:09:24.393870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.304 [2024-11-19 10:09:24.393902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.304 [2024-11-19 10:09:24.396945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.304 [2024-11-19 10:09:24.396994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.304 BaseBdev2 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 BaseBdev3_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 [2024-11-19 10:09:24.469576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.304 [2024-11-19 10:09:24.469667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.304 [2024-11-19 10:09:24.469720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.304 [2024-11-19 10:09:24.469741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.304 [2024-11-19 10:09:24.473038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.304 [2024-11-19 10:09:24.473090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.304 BaseBdev3 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 BaseBdev4_malloc 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:10.304 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.304 [2024-11-19 10:09:24.533170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:10.304 [2024-11-19 10:09:24.533278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.304 [2024-11-19 10:09:24.533309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:10.304 [2024-11-19 10:09:24.533327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.563 [2024-11-19 10:09:24.536409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.563 [2024-11-19 10:09:24.536462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.563 BaseBdev4 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.563 spare_malloc 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.563 spare_delay 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.563 [2024-11-19 10:09:24.607157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.563 [2024-11-19 10:09:24.607246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.563 [2024-11-19 10:09:24.607297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.563 [2024-11-19 10:09:24.607316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.563 [2024-11-19 10:09:24.610691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.563 [2024-11-19 10:09:24.610742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.563 spare 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.563 [2024-11-19 10:09:24.619201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.563 [2024-11-19 10:09:24.621992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.563 [2024-11-19 10:09:24.622099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.563 [2024-11-19 10:09:24.622191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:10.563 [2024-11-19 10:09:24.622334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.563 [2024-11-19 10:09:24.622357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:10.563 [2024-11-19 10:09:24.622720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.563 [2024-11-19 10:09:24.623020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.563 [2024-11-19 10:09:24.623050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.563 [2024-11-19 10:09:24.623337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.563 10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.563 "name": "raid_bdev1", 00:15:10.563 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:10.563 "strip_size_kb": 0, 00:15:10.563 "state": "online", 00:15:10.563 "raid_level": "raid1", 00:15:10.563 "superblock": false, 00:15:10.563 "num_base_bdevs": 4, 00:15:10.563 "num_base_bdevs_discovered": 4, 00:15:10.563 "num_base_bdevs_operational": 4, 00:15:10.563 "base_bdevs_list": [ 00:15:10.563 { 00:15:10.563 "name": "BaseBdev1", 00:15:10.563 "uuid": "9575d700-7db8-5e33-97b0-af68c0addc88", 00:15:10.563 "is_configured": true, 00:15:10.563 "data_offset": 0, 00:15:10.563 "data_size": 65536 00:15:10.563 }, 00:15:10.563 { 00:15:10.563 "name": "BaseBdev2", 00:15:10.563 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:10.563 "is_configured": true, 00:15:10.563 "data_offset": 0, 00:15:10.563 "data_size": 65536 00:15:10.563 }, 00:15:10.563 { 00:15:10.563 "name": "BaseBdev3", 00:15:10.563 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:10.563 "is_configured": true, 00:15:10.563 "data_offset": 0, 00:15:10.563 "data_size": 65536 00:15:10.563 }, 00:15:10.563 { 00:15:10.563 "name": "BaseBdev4", 00:15:10.564 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:10.564 "is_configured": true, 00:15:10.564 "data_offset": 0, 00:15:10.564 "data_size": 65536 00:15:10.564 } 00:15:10.564 ] 00:15:10.564 }' 00:15:10.564 
10:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.564 10:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.131 [2024-11-19 10:09:25.155998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 
-- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 [2024-11-19 10:09:25.271535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.131 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.131 "name": "raid_bdev1", 00:15:11.131 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:11.131 "strip_size_kb": 0, 00:15:11.131 "state": "online", 00:15:11.131 "raid_level": "raid1", 00:15:11.131 "superblock": false, 00:15:11.131 "num_base_bdevs": 4, 00:15:11.131 "num_base_bdevs_discovered": 3, 00:15:11.131 "num_base_bdevs_operational": 3, 00:15:11.131 "base_bdevs_list": [ 00:15:11.131 { 00:15:11.131 "name": null, 00:15:11.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.131 "is_configured": false, 00:15:11.131 "data_offset": 0, 00:15:11.131 "data_size": 65536 00:15:11.131 }, 00:15:11.131 { 00:15:11.131 "name": "BaseBdev2", 00:15:11.132 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:11.132 "is_configured": true, 00:15:11.132 "data_offset": 0, 00:15:11.132 "data_size": 65536 00:15:11.132 }, 00:15:11.132 { 00:15:11.132 "name": "BaseBdev3", 00:15:11.132 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:11.132 "is_configured": true, 00:15:11.132 "data_offset": 0, 00:15:11.132 "data_size": 65536 00:15:11.132 }, 00:15:11.132 { 00:15:11.132 "name": "BaseBdev4", 00:15:11.132 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:11.132 "is_configured": true, 00:15:11.132 "data_offset": 0, 00:15:11.132 "data_size": 65536 00:15:11.132 } 00:15:11.132 ] 00:15:11.132 }' 00:15:11.132 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.132 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.390 [2024-11-19 10:09:25.409041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:11.390 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:11.390 Zero copy mechanism will not be used. 00:15:11.390 Running I/O for 60 seconds... 
00:15:11.649 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.649 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.649 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.649 [2024-11-19 10:09:25.793428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.649 10:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.649 10:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.649 [2024-11-19 10:09:25.838627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:11.649 [2024-11-19 10:09:25.841577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.907 [2024-11-19 10:09:25.967036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.907 [2024-11-19 10:09:25.967975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.164 [2024-11-19 10:09:26.184211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.164 [2024-11-19 10:09:26.185397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.423 143.00 IOPS, 429.00 MiB/s [2024-11-19T10:09:26.655Z] [2024-11-19 10:09:26.557615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.681 [2024-11-19 10:09:26.694175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.681 [2024-11-19 10:09:26.694721] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.681 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.681 "name": "raid_bdev1", 00:15:12.681 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:12.681 "strip_size_kb": 0, 00:15:12.681 "state": "online", 00:15:12.681 "raid_level": "raid1", 00:15:12.681 "superblock": false, 00:15:12.681 "num_base_bdevs": 4, 00:15:12.681 "num_base_bdevs_discovered": 4, 00:15:12.681 "num_base_bdevs_operational": 4, 00:15:12.681 "process": { 00:15:12.681 "type": "rebuild", 00:15:12.681 "target": "spare", 00:15:12.681 "progress": { 00:15:12.681 "blocks": 12288, 00:15:12.681 "percent": 18 00:15:12.681 } 00:15:12.681 }, 00:15:12.681 "base_bdevs_list": [ 00:15:12.681 { 00:15:12.681 "name": "spare", 00:15:12.681 "uuid": 
"64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:12.681 "is_configured": true, 00:15:12.681 "data_offset": 0, 00:15:12.681 "data_size": 65536 00:15:12.681 }, 00:15:12.681 { 00:15:12.681 "name": "BaseBdev2", 00:15:12.681 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:12.682 "is_configured": true, 00:15:12.682 "data_offset": 0, 00:15:12.682 "data_size": 65536 00:15:12.682 }, 00:15:12.682 { 00:15:12.682 "name": "BaseBdev3", 00:15:12.682 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:12.682 "is_configured": true, 00:15:12.682 "data_offset": 0, 00:15:12.682 "data_size": 65536 00:15:12.682 }, 00:15:12.682 { 00:15:12.682 "name": "BaseBdev4", 00:15:12.682 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:12.682 "is_configured": true, 00:15:12.682 "data_offset": 0, 00:15:12.682 "data_size": 65536 00:15:12.682 } 00:15:12.682 ] 00:15:12.682 }' 00:15:12.682 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.941 10:09:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.941 [2024-11-19 10:09:26.981164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.941 [2024-11-19 10:09:27.040389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:12.941 [2024-11-19 10:09:27.041135] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:12.941 [2024-11-19 10:09:27.145496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.941 [2024-11-19 10:09:27.158989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.941 [2024-11-19 10:09:27.159175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.941 [2024-11-19 10:09:27.159242] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.200 [2024-11-19 10:09:27.184750] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.200 "name": "raid_bdev1", 00:15:13.200 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:13.200 "strip_size_kb": 0, 00:15:13.200 "state": "online", 00:15:13.200 "raid_level": "raid1", 00:15:13.200 "superblock": false, 00:15:13.200 "num_base_bdevs": 4, 00:15:13.200 "num_base_bdevs_discovered": 3, 00:15:13.200 "num_base_bdevs_operational": 3, 00:15:13.200 "base_bdevs_list": [ 00:15:13.200 { 00:15:13.200 "name": null, 00:15:13.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.200 "is_configured": false, 00:15:13.200 "data_offset": 0, 00:15:13.200 "data_size": 65536 00:15:13.200 }, 00:15:13.200 { 00:15:13.200 "name": "BaseBdev2", 00:15:13.200 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:13.200 "is_configured": true, 00:15:13.200 "data_offset": 0, 00:15:13.200 "data_size": 65536 00:15:13.200 }, 00:15:13.200 { 00:15:13.200 "name": "BaseBdev3", 00:15:13.200 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:13.200 "is_configured": true, 00:15:13.200 "data_offset": 0, 00:15:13.200 "data_size": 65536 00:15:13.200 }, 00:15:13.200 { 00:15:13.200 "name": "BaseBdev4", 00:15:13.200 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:13.200 "is_configured": true, 00:15:13.200 "data_offset": 0, 00:15:13.200 "data_size": 65536 00:15:13.200 } 00:15:13.200 ] 00:15:13.200 }' 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:13.200 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.769 131.00 IOPS, 393.00 MiB/s [2024-11-19T10:09:28.001Z] 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.769 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.769 "name": "raid_bdev1", 00:15:13.769 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:13.769 "strip_size_kb": 0, 00:15:13.769 "state": "online", 00:15:13.769 "raid_level": "raid1", 00:15:13.769 "superblock": false, 00:15:13.769 "num_base_bdevs": 4, 00:15:13.769 "num_base_bdevs_discovered": 3, 00:15:13.769 "num_base_bdevs_operational": 3, 00:15:13.769 "base_bdevs_list": [ 00:15:13.769 { 00:15:13.769 "name": null, 00:15:13.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.769 "is_configured": false, 00:15:13.769 "data_offset": 0, 00:15:13.769 "data_size": 65536 00:15:13.769 }, 00:15:13.769 { 
00:15:13.769 "name": "BaseBdev2", 00:15:13.769 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:13.769 "is_configured": true, 00:15:13.769 "data_offset": 0, 00:15:13.769 "data_size": 65536 00:15:13.769 }, 00:15:13.770 { 00:15:13.770 "name": "BaseBdev3", 00:15:13.770 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:13.770 "is_configured": true, 00:15:13.770 "data_offset": 0, 00:15:13.770 "data_size": 65536 00:15:13.770 }, 00:15:13.770 { 00:15:13.770 "name": "BaseBdev4", 00:15:13.770 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:13.770 "is_configured": true, 00:15:13.770 "data_offset": 0, 00:15:13.770 "data_size": 65536 00:15:13.770 } 00:15:13.770 ] 00:15:13.770 }' 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.770 [2024-11-19 10:09:27.908159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.770 10:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.770 [2024-11-19 10:09:27.985901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:13.770 [2024-11-19 10:09:27.989106] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.037 [2024-11-19 10:09:28.102432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.037 [2024-11-19 10:09:28.103302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.296 [2024-11-19 10:09:28.319905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.296 [2024-11-19 10:09:28.321133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.555 139.67 IOPS, 419.00 MiB/s [2024-11-19T10:09:28.787Z] [2024-11-19 10:09:28.704609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.555 [2024-11-19 10:09:28.707050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.814 [2024-11-19 10:09:28.941257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.814 [2024-11-19 10:09:28.941937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.814 10:09:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.814 10:09:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.814 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.814 "name": "raid_bdev1", 00:15:14.814 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:14.814 "strip_size_kb": 0, 00:15:14.814 "state": "online", 00:15:14.814 "raid_level": "raid1", 00:15:14.814 "superblock": false, 00:15:14.814 "num_base_bdevs": 4, 00:15:14.814 "num_base_bdevs_discovered": 4, 00:15:14.814 "num_base_bdevs_operational": 4, 00:15:14.814 "process": { 00:15:14.814 "type": "rebuild", 00:15:14.814 "target": "spare", 00:15:14.814 "progress": { 00:15:14.814 "blocks": 10240, 00:15:14.814 "percent": 15 00:15:14.814 } 00:15:14.814 }, 00:15:14.814 "base_bdevs_list": [ 00:15:14.814 { 00:15:14.814 "name": "spare", 00:15:14.814 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:14.814 "is_configured": true, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 }, 00:15:14.814 { 00:15:14.814 "name": "BaseBdev2", 00:15:14.814 "uuid": "e6c61203-5e1c-56e5-bbad-4044fe546673", 00:15:14.814 "is_configured": true, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 }, 00:15:14.814 { 00:15:14.814 "name": "BaseBdev3", 00:15:14.814 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:14.814 "is_configured": true, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 }, 00:15:14.814 { 00:15:14.814 "name": "BaseBdev4", 00:15:14.814 "uuid": 
"95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:14.814 "is_configured": true, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 } 00:15:14.814 ] 00:15:14.814 }' 00:15:14.814 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.074 [2024-11-19 10:09:29.130230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.074 [2024-11-19 10:09:29.249839] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:15.074 [2024-11-19 10:09:29.250266] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:15.074 
10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.074 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.333 "name": "raid_bdev1", 00:15:15.333 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:15.333 "strip_size_kb": 0, 00:15:15.333 "state": "online", 00:15:15.333 "raid_level": "raid1", 00:15:15.333 "superblock": false, 00:15:15.333 "num_base_bdevs": 4, 00:15:15.333 "num_base_bdevs_discovered": 3, 00:15:15.333 "num_base_bdevs_operational": 3, 00:15:15.333 "process": { 00:15:15.333 "type": "rebuild", 00:15:15.333 "target": "spare", 00:15:15.333 "progress": { 00:15:15.333 "blocks": 12288, 00:15:15.333 "percent": 18 00:15:15.333 } 00:15:15.333 }, 00:15:15.333 "base_bdevs_list": [ 00:15:15.333 { 00:15:15.333 "name": "spare", 00:15:15.333 "uuid": 
"64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 00:15:15.333 { 00:15:15.333 "name": null, 00:15:15.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.333 "is_configured": false, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 00:15:15.333 { 00:15:15.333 "name": "BaseBdev3", 00:15:15.333 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 00:15:15.333 { 00:15:15.333 "name": "BaseBdev4", 00:15:15.333 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 } 00:15:15.333 ] 00:15:15.333 }' 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.333 [2024-11-19 10:09:29.424504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:15.333 120.25 IOPS, 360.75 MiB/s [2024-11-19T10:09:29.565Z] 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.333 
10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.333 "name": "raid_bdev1", 00:15:15.333 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:15.333 "strip_size_kb": 0, 00:15:15.333 "state": "online", 00:15:15.333 "raid_level": "raid1", 00:15:15.333 "superblock": false, 00:15:15.333 "num_base_bdevs": 4, 00:15:15.333 "num_base_bdevs_discovered": 3, 00:15:15.333 "num_base_bdevs_operational": 3, 00:15:15.333 "process": { 00:15:15.333 "type": "rebuild", 00:15:15.333 "target": "spare", 00:15:15.333 "progress": { 00:15:15.333 "blocks": 14336, 00:15:15.333 "percent": 21 00:15:15.333 } 00:15:15.333 }, 00:15:15.333 "base_bdevs_list": [ 00:15:15.333 { 00:15:15.333 "name": "spare", 00:15:15.333 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 00:15:15.333 { 00:15:15.333 "name": null, 00:15:15.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.333 "is_configured": false, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 
00:15:15.333 { 00:15:15.333 "name": "BaseBdev3", 00:15:15.333 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 }, 00:15:15.333 { 00:15:15.333 "name": "BaseBdev4", 00:15:15.333 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:15.333 "is_configured": true, 00:15:15.333 "data_offset": 0, 00:15:15.333 "data_size": 65536 00:15:15.333 } 00:15:15.333 ] 00:15:15.333 }' 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.333 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.592 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.592 10:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.592 [2024-11-19 10:09:29.637397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:15.592 [2024-11-19 10:09:29.638612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:16.160 [2024-11-19 10:09:30.355616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:16.420 105.40 IOPS, 316.20 MiB/s [2024-11-19T10:09:30.652Z] 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.420 10:09:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.679 "name": "raid_bdev1", 00:15:16.679 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:16.679 "strip_size_kb": 0, 00:15:16.679 "state": "online", 00:15:16.679 "raid_level": "raid1", 00:15:16.679 "superblock": false, 00:15:16.679 "num_base_bdevs": 4, 00:15:16.679 "num_base_bdevs_discovered": 3, 00:15:16.679 "num_base_bdevs_operational": 3, 00:15:16.679 "process": { 00:15:16.679 "type": "rebuild", 00:15:16.679 "target": "spare", 00:15:16.679 "progress": { 00:15:16.679 "blocks": 28672, 00:15:16.679 "percent": 43 00:15:16.679 } 00:15:16.679 }, 00:15:16.679 "base_bdevs_list": [ 00:15:16.679 { 00:15:16.679 "name": "spare", 00:15:16.679 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:16.679 "is_configured": true, 00:15:16.679 "data_offset": 0, 00:15:16.679 "data_size": 65536 00:15:16.679 }, 00:15:16.679 { 00:15:16.679 "name": null, 00:15:16.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.679 "is_configured": false, 00:15:16.679 "data_offset": 0, 00:15:16.679 "data_size": 65536 00:15:16.679 }, 00:15:16.679 { 00:15:16.679 "name": "BaseBdev3", 
00:15:16.679 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:16.679 "is_configured": true, 00:15:16.679 "data_offset": 0, 00:15:16.679 "data_size": 65536 00:15:16.679 }, 00:15:16.679 { 00:15:16.679 "name": "BaseBdev4", 00:15:16.679 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:16.679 "is_configured": true, 00:15:16.679 "data_offset": 0, 00:15:16.679 "data_size": 65536 00:15:16.679 } 00:15:16.679 ] 00:15:16.679 }' 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.679 10:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.938 [2024-11-19 10:09:31.109365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:17.528 96.33 IOPS, 289.00 MiB/s [2024-11-19T10:09:31.760Z] [2024-11-19 10:09:31.567779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.788 [2024-11-19 10:09:31.779499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.788 "name": "raid_bdev1", 00:15:17.788 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:17.788 "strip_size_kb": 0, 00:15:17.788 "state": "online", 00:15:17.788 "raid_level": "raid1", 00:15:17.788 "superblock": false, 00:15:17.788 "num_base_bdevs": 4, 00:15:17.788 "num_base_bdevs_discovered": 3, 00:15:17.788 "num_base_bdevs_operational": 3, 00:15:17.788 "process": { 00:15:17.788 "type": "rebuild", 00:15:17.788 "target": "spare", 00:15:17.788 "progress": { 00:15:17.788 "blocks": 47104, 00:15:17.788 "percent": 71 00:15:17.788 } 00:15:17.788 }, 00:15:17.788 "base_bdevs_list": [ 00:15:17.788 { 00:15:17.788 "name": "spare", 00:15:17.788 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:17.788 "is_configured": true, 00:15:17.788 "data_offset": 0, 00:15:17.788 "data_size": 65536 00:15:17.788 }, 00:15:17.788 { 00:15:17.788 "name": null, 00:15:17.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.788 "is_configured": false, 00:15:17.788 "data_offset": 0, 00:15:17.788 "data_size": 65536 00:15:17.788 }, 00:15:17.788 { 00:15:17.788 "name": "BaseBdev3", 00:15:17.788 "uuid": 
"0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:17.788 "is_configured": true, 00:15:17.788 "data_offset": 0, 00:15:17.788 "data_size": 65536 00:15:17.788 }, 00:15:17.788 { 00:15:17.788 "name": "BaseBdev4", 00:15:17.788 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:17.788 "is_configured": true, 00:15:17.788 "data_offset": 0, 00:15:17.788 "data_size": 65536 00:15:17.788 } 00:15:17.788 ] 00:15:17.788 }' 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.788 10:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.356 85.86 IOPS, 257.57 MiB/s [2024-11-19T10:09:32.588Z] [2024-11-19 10:09:32.567504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:18.924 [2024-11-19 10:09:32.899701] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.924 10:09:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.924 10:09:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.924 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.924 "name": "raid_bdev1", 00:15:18.924 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:18.924 "strip_size_kb": 0, 00:15:18.924 "state": "online", 00:15:18.924 "raid_level": "raid1", 00:15:18.924 "superblock": false, 00:15:18.924 "num_base_bdevs": 4, 00:15:18.924 "num_base_bdevs_discovered": 3, 00:15:18.924 "num_base_bdevs_operational": 3, 00:15:18.924 "process": { 00:15:18.924 "type": "rebuild", 00:15:18.924 "target": "spare", 00:15:18.924 "progress": { 00:15:18.924 "blocks": 65536, 00:15:18.924 "percent": 100 00:15:18.924 } 00:15:18.924 }, 00:15:18.924 "base_bdevs_list": [ 00:15:18.924 { 00:15:18.924 "name": "spare", 00:15:18.924 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:18.924 "is_configured": true, 00:15:18.924 "data_offset": 0, 00:15:18.924 "data_size": 65536 00:15:18.924 }, 00:15:18.924 { 00:15:18.924 "name": null, 00:15:18.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.924 "is_configured": false, 00:15:18.924 "data_offset": 0, 00:15:18.924 "data_size": 65536 00:15:18.924 }, 00:15:18.924 { 00:15:18.924 "name": "BaseBdev3", 00:15:18.924 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:18.924 "is_configured": true, 00:15:18.924 "data_offset": 0, 00:15:18.924 "data_size": 65536 00:15:18.924 }, 00:15:18.924 { 00:15:18.924 "name": "BaseBdev4", 00:15:18.924 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 
00:15:18.924 "is_configured": true, 00:15:18.924 "data_offset": 0, 00:15:18.924 "data_size": 65536 00:15:18.924 } 00:15:18.924 ] 00:15:18.924 }' 00:15:18.924 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.924 [2024-11-19 10:09:33.007462] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:18.924 [2024-11-19 10:09:33.013747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.924 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.924 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.924 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.925 10:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.061 78.50 IOPS, 235.50 MiB/s [2024-11-19T10:09:34.293Z] 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.061 "name": "raid_bdev1", 00:15:20.061 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:20.061 "strip_size_kb": 0, 00:15:20.061 "state": "online", 00:15:20.061 "raid_level": "raid1", 00:15:20.061 "superblock": false, 00:15:20.061 "num_base_bdevs": 4, 00:15:20.061 "num_base_bdevs_discovered": 3, 00:15:20.061 "num_base_bdevs_operational": 3, 00:15:20.061 "base_bdevs_list": [ 00:15:20.061 { 00:15:20.061 "name": "spare", 00:15:20.061 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:20.061 "is_configured": true, 00:15:20.061 "data_offset": 0, 00:15:20.061 "data_size": 65536 00:15:20.061 }, 00:15:20.061 { 00:15:20.061 "name": null, 00:15:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.061 "is_configured": false, 00:15:20.061 "data_offset": 0, 00:15:20.061 "data_size": 65536 00:15:20.061 }, 00:15:20.061 { 00:15:20.061 "name": "BaseBdev3", 00:15:20.061 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:20.061 "is_configured": true, 00:15:20.061 "data_offset": 0, 00:15:20.061 "data_size": 65536 00:15:20.061 }, 00:15:20.061 { 00:15:20.061 "name": "BaseBdev4", 00:15:20.061 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:20.061 "is_configured": true, 00:15:20.061 "data_offset": 0, 00:15:20.061 "data_size": 65536 00:15:20.061 } 00:15:20.061 ] 00:15:20.061 }' 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:20.061 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.320 10:09:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.320 "name": "raid_bdev1", 00:15:20.320 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:20.320 "strip_size_kb": 0, 00:15:20.320 "state": "online", 00:15:20.320 "raid_level": "raid1", 00:15:20.320 "superblock": false, 00:15:20.320 "num_base_bdevs": 4, 00:15:20.320 "num_base_bdevs_discovered": 3, 00:15:20.320 "num_base_bdevs_operational": 3, 00:15:20.320 "base_bdevs_list": [ 00:15:20.320 { 00:15:20.320 "name": "spare", 00:15:20.320 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 }, 
00:15:20.320 { 00:15:20.320 "name": null, 00:15:20.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.320 "is_configured": false, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 }, 00:15:20.320 { 00:15:20.320 "name": "BaseBdev3", 00:15:20.320 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 }, 00:15:20.320 { 00:15:20.320 "name": "BaseBdev4", 00:15:20.320 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 } 00:15:20.320 ] 00:15:20.320 }' 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.320 73.78 IOPS, 221.33 MiB/s [2024-11-19T10:09:34.552Z] 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.320 
10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.320 "name": "raid_bdev1", 00:15:20.320 "uuid": "c50a112e-3918-4640-9193-8e0603fabe86", 00:15:20.320 "strip_size_kb": 0, 00:15:20.320 "state": "online", 00:15:20.320 "raid_level": "raid1", 00:15:20.320 "superblock": false, 00:15:20.320 "num_base_bdevs": 4, 00:15:20.320 "num_base_bdevs_discovered": 3, 00:15:20.320 "num_base_bdevs_operational": 3, 00:15:20.320 "base_bdevs_list": [ 00:15:20.320 { 00:15:20.320 "name": "spare", 00:15:20.320 "uuid": "64ff257f-bb28-568b-929e-23f3788f38ec", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 }, 00:15:20.320 { 00:15:20.320 "name": null, 00:15:20.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.320 "is_configured": false, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 }, 00:15:20.320 { 00:15:20.320 "name": "BaseBdev3", 00:15:20.320 "uuid": "0fb7a261-5716-5c37-8ba7-1c6518552f0d", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 
00:15:20.320 }, 00:15:20.320 { 00:15:20.320 "name": "BaseBdev4", 00:15:20.320 "uuid": "95dfe7d7-f90d-5dae-a7e0-f7eca93e597e", 00:15:20.320 "is_configured": true, 00:15:20.320 "data_offset": 0, 00:15:20.320 "data_size": 65536 00:15:20.320 } 00:15:20.320 ] 00:15:20.320 }' 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.320 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.888 10:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.888 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.888 10:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.888 [2024-11-19 10:09:35.002999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.888 [2024-11-19 10:09:35.003043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.888 00:15:20.888 Latency(us) 00:15:20.888 [2024-11-19T10:09:35.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.888 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:20.888 raid_bdev1 : 9.62 71.74 215.22 0.00 0.00 19692.96 273.69 123922.62 00:15:20.888 [2024-11-19T10:09:35.120Z] =================================================================================================================== 00:15:20.888 [2024-11-19T10:09:35.120Z] Total : 71.74 215.22 0.00 0.00 19692.96 273.69 123922.62 00:15:20.888 [2024-11-19 10:09:35.052597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.888 [2024-11-19 10:09:35.052677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.888 [2024-11-19 10:09:35.052907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:15:20.888 [2024-11-19 10:09:35.052931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:20.888 { 00:15:20.888 "results": [ 00:15:20.888 { 00:15:20.888 "job": "raid_bdev1", 00:15:20.888 "core_mask": "0x1", 00:15:20.888 "workload": "randrw", 00:15:20.888 "percentage": 50, 00:15:20.888 "status": "finished", 00:15:20.888 "queue_depth": 2, 00:15:20.888 "io_size": 3145728, 00:15:20.888 "runtime": 9.618188, 00:15:20.888 "iops": 71.73908432648645, 00:15:20.888 "mibps": 215.21725297945935, 00:15:20.888 "io_failed": 0, 00:15:20.888 "io_timeout": 0, 00:15:20.888 "avg_latency_us": 19692.96493280632, 00:15:20.888 "min_latency_us": 273.6872727272727, 00:15:20.888 "max_latency_us": 123922.61818181818 00:15:20.888 } 00:15:20.888 ], 00:15:20.889 "core_count": 1 00:15:20.889 } 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.889 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:21.471 /dev/nbd0 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.471 10:09:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.471 1+0 records in 00:15:21.471 1+0 records out 00:15:21.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627762 s, 6.5 MB/s 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:21.471 10:09:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.471 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:21.731 /dev/nbd1 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:15:21.731 1+0 records in 00:15:21.731 1+0 records out 00:15:21.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031426 s, 13.0 MB/s 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.731 10:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.989 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.248 10:09:36 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.248 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.249 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:15:22.507 /dev/nbd1 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.507 1+0 records in 00:15:22.507 1+0 records out 00:15:22.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320821 s, 12.8 MB/s 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.507 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.766 10:09:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.025 10:09:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.025 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79028 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79028 ']' 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 79028 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79028 00:15:23.284 killing process with pid 79028 00:15:23.284 Received shutdown signal, test time was about 12.053582 seconds 00:15:23.284 00:15:23.284 Latency(us) 00:15:23.284 [2024-11-19T10:09:37.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.284 [2024-11-19T10:09:37.516Z] =================================================================================================================== 00:15:23.284 [2024-11-19T10:09:37.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79028' 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79028 00:15:23.284 10:09:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79028 00:15:23.284 [2024-11-19 10:09:37.466288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.851 [2024-11-19 10:09:37.880194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:25.256 00:15:25.256 real 0m15.912s 00:15:25.256 user 0m20.664s 00:15:25.256 sys 0m2.002s 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.256 ************************************ 00:15:25.256 END TEST raid_rebuild_test_io 00:15:25.256 ************************************ 00:15:25.256 10:09:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:25.256 10:09:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:25.256 10:09:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.256 10:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 ************************************ 00:15:25.256 START TEST raid_rebuild_test_sb_io 00:15:25.256 ************************************ 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79469 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79469 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79469 ']' 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.256 10:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 [2024-11-19 10:09:39.257361] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:15:25.256 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:25.256 Zero copy mechanism will not be used. 
00:15:25.256 [2024-11-19 10:09:39.257560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79469 ] 00:15:25.256 [2024-11-19 10:09:39.458070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.515 [2024-11-19 10:09:39.636206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.774 [2024-11-19 10:09:39.891074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.774 [2024-11-19 10:09:39.891161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 BaseBdev1_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 [2024-11-19 10:09:40.332051] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.344 [2024-11-19 10:09:40.332162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.344 [2024-11-19 10:09:40.332198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.344 [2024-11-19 10:09:40.332218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.344 [2024-11-19 10:09:40.335531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.344 [2024-11-19 10:09:40.335604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.344 BaseBdev1 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 BaseBdev2_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 [2024-11-19 10:09:40.395762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:26.344 [2024-11-19 10:09:40.395868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:26.344 [2024-11-19 10:09:40.395900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.344 [2024-11-19 10:09:40.395920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.344 [2024-11-19 10:09:40.398949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.344 [2024-11-19 10:09:40.398999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.344 BaseBdev2 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 BaseBdev3_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 [2024-11-19 10:09:40.463948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:26.344 [2024-11-19 10:09:40.464037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.344 [2024-11-19 10:09:40.464097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:26.344 
[2024-11-19 10:09:40.464119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.344 [2024-11-19 10:09:40.467382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.344 [2024-11-19 10:09:40.467433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:26.344 BaseBdev3 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 BaseBdev4_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 [2024-11-19 10:09:40.516637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:26.344 [2024-11-19 10:09:40.516736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.344 [2024-11-19 10:09:40.516764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:26.344 [2024-11-19 10:09:40.516780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.344 [2024-11-19 10:09:40.519753] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.344 [2024-11-19 10:09:40.519830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:26.344 BaseBdev4 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 spare_malloc 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.344 spare_delay 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.344 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.604 [2024-11-19 10:09:40.576392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.604 [2024-11-19 10:09:40.576472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.604 [2024-11-19 10:09:40.576503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:15:26.604 [2024-11-19 10:09:40.576519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.604 [2024-11-19 10:09:40.579734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.604 [2024-11-19 10:09:40.579810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.604 spare 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.604 [2024-11-19 10:09:40.584606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.604 [2024-11-19 10:09:40.587170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.604 [2024-11-19 10:09:40.587297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.604 [2024-11-19 10:09:40.587390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.604 [2024-11-19 10:09:40.587669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:26.604 [2024-11-19 10:09:40.587715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:26.604 [2024-11-19 10:09:40.588146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.604 [2024-11-19 10:09:40.588411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:26.604 [2024-11-19 10:09:40.588445] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:26.604 [2024-11-19 10:09:40.588742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.604 "name": "raid_bdev1", 00:15:26.604 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:26.604 "strip_size_kb": 0, 00:15:26.604 "state": "online", 00:15:26.604 "raid_level": "raid1", 00:15:26.604 "superblock": true, 00:15:26.604 "num_base_bdevs": 4, 00:15:26.604 "num_base_bdevs_discovered": 4, 00:15:26.604 "num_base_bdevs_operational": 4, 00:15:26.604 "base_bdevs_list": [ 00:15:26.604 { 00:15:26.604 "name": "BaseBdev1", 00:15:26.604 "uuid": "ea1d824c-8e0c-5c2f-9625-6dc437c84d61", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": "BaseBdev2", 00:15:26.604 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": "BaseBdev3", 00:15:26.604 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": "BaseBdev4", 00:15:26.604 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 } 00:15:26.604 ] 00:15:26.604 }' 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.604 10:09:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.173 [2024-11-19 10:09:41.121464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 [2024-11-19 10:09:41.220890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.173 10:09:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.174 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.174 "name": "raid_bdev1", 00:15:27.174 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:27.174 "strip_size_kb": 0, 00:15:27.174 "state": "online", 00:15:27.174 "raid_level": "raid1", 00:15:27.174 
"superblock": true, 00:15:27.174 "num_base_bdevs": 4, 00:15:27.174 "num_base_bdevs_discovered": 3, 00:15:27.174 "num_base_bdevs_operational": 3, 00:15:27.174 "base_bdevs_list": [ 00:15:27.174 { 00:15:27.174 "name": null, 00:15:27.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.174 "is_configured": false, 00:15:27.174 "data_offset": 0, 00:15:27.174 "data_size": 63488 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev2", 00:15:27.174 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:27.174 "is_configured": true, 00:15:27.174 "data_offset": 2048, 00:15:27.174 "data_size": 63488 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev3", 00:15:27.174 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:27.174 "is_configured": true, 00:15:27.174 "data_offset": 2048, 00:15:27.174 "data_size": 63488 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev4", 00:15:27.174 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:27.174 "is_configured": true, 00:15:27.174 "data_offset": 2048, 00:15:27.174 "data_size": 63488 00:15:27.174 } 00:15:27.174 ] 00:15:27.174 }' 00:15:27.174 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.174 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.174 [2024-11-19 10:09:41.329324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:27.174 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:27.174 Zero copy mechanism will not be used. 00:15:27.174 Running I/O for 60 seconds... 
00:15:27.742 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.742 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.742 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.742 [2024-11-19 10:09:41.762285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.742 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.742 10:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:27.742 [2024-11-19 10:09:41.864028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:27.742 [2024-11-19 10:09:41.867241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.002 [2024-11-19 10:09:42.000178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:28.002 [2024-11-19 10:09:42.002459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:28.002 [2024-11-19 10:09:42.233714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:28.002 [2024-11-19 10:09:42.234290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:28.521 122.00 IOPS, 366.00 MiB/s [2024-11-19T10:09:42.753Z] [2024-11-19 10:09:42.592355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:28.835 [2024-11-19 10:09:42.756959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.835 "name": "raid_bdev1", 00:15:28.835 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:28.835 "strip_size_kb": 0, 00:15:28.835 "state": "online", 00:15:28.835 "raid_level": "raid1", 00:15:28.835 "superblock": true, 00:15:28.835 "num_base_bdevs": 4, 00:15:28.835 "num_base_bdevs_discovered": 4, 00:15:28.835 "num_base_bdevs_operational": 4, 00:15:28.835 "process": { 00:15:28.835 "type": "rebuild", 00:15:28.835 "target": "spare", 00:15:28.835 "progress": { 00:15:28.835 "blocks": 10240, 00:15:28.835 "percent": 16 00:15:28.835 } 00:15:28.835 }, 00:15:28.835 "base_bdevs_list": [ 00:15:28.835 { 00:15:28.835 "name": "spare", 00:15:28.835 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:28.835 "is_configured": true, 00:15:28.835 "data_offset": 2048, 00:15:28.835 "data_size": 63488 
00:15:28.835 }, 00:15:28.835 { 00:15:28.835 "name": "BaseBdev2", 00:15:28.835 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:28.835 "is_configured": true, 00:15:28.835 "data_offset": 2048, 00:15:28.835 "data_size": 63488 00:15:28.835 }, 00:15:28.835 { 00:15:28.835 "name": "BaseBdev3", 00:15:28.835 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:28.835 "is_configured": true, 00:15:28.835 "data_offset": 2048, 00:15:28.835 "data_size": 63488 00:15:28.835 }, 00:15:28.835 { 00:15:28.835 "name": "BaseBdev4", 00:15:28.835 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:28.835 "is_configured": true, 00:15:28.835 "data_offset": 2048, 00:15:28.835 "data_size": 63488 00:15:28.835 } 00:15:28.835 ] 00:15:28.835 }' 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.835 [2024-11-19 10:09:42.967609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:28.835 [2024-11-19 10:09:42.968595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.835 10:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.835 [2024-11-19 10:09:43.005790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.108 [2024-11-19 
10:09:43.113829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:29.108 [2024-11-19 10:09:43.218362] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:29.108 [2024-11-19 10:09:43.232389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.108 [2024-11-19 10:09:43.232452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.108 [2024-11-19 10:09:43.232468] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:29.108 [2024-11-19 10:09:43.268642] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.108 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.368 93.00 IOPS, 279.00 MiB/s [2024-11-19T10:09:43.601Z] 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.369 "name": "raid_bdev1", 00:15:29.369 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:29.369 "strip_size_kb": 0, 00:15:29.369 "state": "online", 00:15:29.369 "raid_level": "raid1", 00:15:29.369 "superblock": true, 00:15:29.369 "num_base_bdevs": 4, 00:15:29.369 "num_base_bdevs_discovered": 3, 00:15:29.369 "num_base_bdevs_operational": 3, 00:15:29.369 "base_bdevs_list": [ 00:15:29.369 { 00:15:29.369 "name": null, 00:15:29.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.369 "is_configured": false, 00:15:29.369 "data_offset": 0, 00:15:29.369 "data_size": 63488 00:15:29.369 }, 00:15:29.369 { 00:15:29.369 "name": "BaseBdev2", 00:15:29.369 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:29.369 "is_configured": true, 00:15:29.369 "data_offset": 2048, 00:15:29.369 "data_size": 63488 00:15:29.369 }, 00:15:29.369 { 00:15:29.369 "name": "BaseBdev3", 00:15:29.369 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:29.369 "is_configured": true, 00:15:29.369 "data_offset": 2048, 00:15:29.369 "data_size": 63488 00:15:29.369 }, 00:15:29.369 { 00:15:29.369 "name": "BaseBdev4", 00:15:29.369 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:29.369 "is_configured": true, 00:15:29.369 "data_offset": 
2048, 00:15:29.369 "data_size": 63488 00:15:29.369 } 00:15:29.369 ] 00:15:29.369 }' 00:15:29.369 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.369 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.628 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.887 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.887 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.887 "name": "raid_bdev1", 00:15:29.887 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:29.887 "strip_size_kb": 0, 00:15:29.887 "state": "online", 00:15:29.887 "raid_level": "raid1", 00:15:29.887 "superblock": true, 00:15:29.887 "num_base_bdevs": 4, 00:15:29.887 "num_base_bdevs_discovered": 3, 00:15:29.887 "num_base_bdevs_operational": 3, 00:15:29.887 "base_bdevs_list": [ 00:15:29.887 { 00:15:29.887 "name": null, 00:15:29.887 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:29.887 "is_configured": false, 00:15:29.887 "data_offset": 0, 00:15:29.887 "data_size": 63488 00:15:29.887 }, 00:15:29.887 { 00:15:29.887 "name": "BaseBdev2", 00:15:29.887 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:29.887 "is_configured": true, 00:15:29.887 "data_offset": 2048, 00:15:29.887 "data_size": 63488 00:15:29.887 }, 00:15:29.887 { 00:15:29.887 "name": "BaseBdev3", 00:15:29.887 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:29.887 "is_configured": true, 00:15:29.887 "data_offset": 2048, 00:15:29.887 "data_size": 63488 00:15:29.887 }, 00:15:29.887 { 00:15:29.887 "name": "BaseBdev4", 00:15:29.887 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:29.887 "is_configured": true, 00:15:29.887 "data_offset": 2048, 00:15:29.887 "data_size": 63488 00:15:29.887 } 00:15:29.887 ] 00:15:29.887 }' 00:15:29.887 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.887 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.887 10:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.887 [2024-11-19 10:09:44.026188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.887 10:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:15:29.887 [2024-11-19 10:09:44.088124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:29.887 [2024-11-19 10:09:44.091050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.146 124.00 IOPS, 372.00 MiB/s [2024-11-19T10:09:44.378Z] [2024-11-19 10:09:44.347305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:30.146 [2024-11-19 10:09:44.348476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:30.714 [2024-11-19 10:09:44.775645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.974 "name": "raid_bdev1", 00:15:30.974 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:30.974 "strip_size_kb": 0, 00:15:30.974 "state": "online", 00:15:30.974 "raid_level": "raid1", 00:15:30.974 "superblock": true, 00:15:30.974 "num_base_bdevs": 4, 00:15:30.974 "num_base_bdevs_discovered": 4, 00:15:30.974 "num_base_bdevs_operational": 4, 00:15:30.974 "process": { 00:15:30.974 "type": "rebuild", 00:15:30.974 "target": "spare", 00:15:30.974 "progress": { 00:15:30.974 "blocks": 12288, 00:15:30.974 "percent": 19 00:15:30.974 } 00:15:30.974 }, 00:15:30.974 "base_bdevs_list": [ 00:15:30.974 { 00:15:30.974 "name": "spare", 00:15:30.974 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:30.974 "is_configured": true, 00:15:30.974 "data_offset": 2048, 00:15:30.974 "data_size": 63488 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "name": "BaseBdev2", 00:15:30.974 "uuid": "867c63b2-b49f-5b5c-85c7-330bb3e6535d", 00:15:30.974 "is_configured": true, 00:15:30.974 "data_offset": 2048, 00:15:30.974 "data_size": 63488 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "name": "BaseBdev3", 00:15:30.974 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:30.974 "is_configured": true, 00:15:30.974 "data_offset": 2048, 00:15:30.974 "data_size": 63488 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "name": "BaseBdev4", 00:15:30.974 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:30.974 "is_configured": true, 00:15:30.974 "data_offset": 2048, 00:15:30.974 "data_size": 63488 00:15:30.974 } 00:15:30.974 ] 00:15:30.974 }' 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.974 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:31.233 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.233 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.233 [2024-11-19 10:09:45.236723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:31.492 118.50 IOPS, 355.50 MiB/s [2024-11-19T10:09:45.724Z] [2024-11-19 10:09:45.520057] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:31.492 [2024-11-19 10:09:45.520176] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.492 "name": "raid_bdev1", 00:15:31.492 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:31.492 "strip_size_kb": 0, 00:15:31.492 "state": "online", 00:15:31.492 "raid_level": "raid1", 00:15:31.492 "superblock": true, 00:15:31.492 "num_base_bdevs": 4, 00:15:31.492 "num_base_bdevs_discovered": 3, 00:15:31.492 "num_base_bdevs_operational": 3, 00:15:31.492 "process": { 00:15:31.492 "type": "rebuild", 00:15:31.492 "target": "spare", 00:15:31.492 "progress": { 00:15:31.492 "blocks": 16384, 00:15:31.492 "percent": 25 00:15:31.492 } 00:15:31.492 }, 00:15:31.492 "base_bdevs_list": [ 00:15:31.492 { 00:15:31.492 "name": "spare", 00:15:31.492 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:31.492 "is_configured": true, 00:15:31.492 "data_offset": 2048, 00:15:31.492 "data_size": 63488 00:15:31.492 }, 00:15:31.492 { 00:15:31.492 "name": null, 00:15:31.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.492 
"is_configured": false, 00:15:31.492 "data_offset": 0, 00:15:31.492 "data_size": 63488 00:15:31.492 }, 00:15:31.492 { 00:15:31.492 "name": "BaseBdev3", 00:15:31.492 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:31.492 "is_configured": true, 00:15:31.492 "data_offset": 2048, 00:15:31.492 "data_size": 63488 00:15:31.492 }, 00:15:31.492 { 00:15:31.492 "name": "BaseBdev4", 00:15:31.492 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:31.492 "is_configured": true, 00:15:31.492 "data_offset": 2048, 00:15:31.492 "data_size": 63488 00:15:31.492 } 00:15:31.492 ] 00:15:31.492 }' 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.492 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.493 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.493 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.493 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.493 10:09:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.493 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.493 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.751 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.751 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.751 "name": "raid_bdev1", 00:15:31.751 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:31.751 "strip_size_kb": 0, 00:15:31.751 "state": "online", 00:15:31.751 "raid_level": "raid1", 00:15:31.751 "superblock": true, 00:15:31.751 "num_base_bdevs": 4, 00:15:31.751 "num_base_bdevs_discovered": 3, 00:15:31.751 "num_base_bdevs_operational": 3, 00:15:31.751 "process": { 00:15:31.751 "type": "rebuild", 00:15:31.751 "target": "spare", 00:15:31.751 "progress": { 00:15:31.751 "blocks": 18432, 00:15:31.751 "percent": 29 00:15:31.751 } 00:15:31.751 }, 00:15:31.751 "base_bdevs_list": [ 00:15:31.751 { 00:15:31.751 "name": "spare", 00:15:31.751 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:31.751 "is_configured": true, 00:15:31.751 "data_offset": 2048, 00:15:31.751 "data_size": 63488 00:15:31.751 }, 00:15:31.751 { 00:15:31.751 "name": null, 00:15:31.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.751 "is_configured": false, 00:15:31.751 "data_offset": 0, 00:15:31.751 "data_size": 63488 00:15:31.751 }, 00:15:31.751 { 00:15:31.751 "name": "BaseBdev3", 00:15:31.751 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:31.751 "is_configured": true, 00:15:31.751 "data_offset": 2048, 00:15:31.751 "data_size": 63488 00:15:31.751 }, 00:15:31.751 { 00:15:31.751 "name": "BaseBdev4", 00:15:31.751 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:31.751 "is_configured": true, 00:15:31.751 "data_offset": 2048, 00:15:31.751 "data_size": 
63488 00:15:31.751 } 00:15:31.751 ] 00:15:31.752 }' 00:15:31.752 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.752 [2024-11-19 10:09:45.790110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:31.752 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.752 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.752 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.752 10:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.752 [2024-11-19 10:09:45.910673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:32.011 [2024-11-19 10:09:46.210196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:32.270 109.40 IOPS, 328.20 MiB/s [2024-11-19T10:09:46.502Z] [2024-11-19 10:09:46.354459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:32.528 [2024-11-19 10:09:46.705396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.788 10:09:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.788 [2024-11-19 10:09:46.910327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:32.788 [2024-11-19 10:09:46.911313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.788 "name": "raid_bdev1", 00:15:32.788 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:32.788 "strip_size_kb": 0, 00:15:32.788 "state": "online", 00:15:32.788 "raid_level": "raid1", 00:15:32.788 "superblock": true, 00:15:32.788 "num_base_bdevs": 4, 00:15:32.788 "num_base_bdevs_discovered": 3, 00:15:32.788 "num_base_bdevs_operational": 3, 00:15:32.788 "process": { 00:15:32.788 "type": "rebuild", 00:15:32.788 "target": "spare", 00:15:32.788 "progress": { 00:15:32.788 "blocks": 36864, 00:15:32.788 "percent": 58 00:15:32.788 } 00:15:32.788 }, 00:15:32.788 "base_bdevs_list": [ 00:15:32.788 { 00:15:32.788 "name": "spare", 00:15:32.788 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:32.788 "is_configured": true, 00:15:32.788 "data_offset": 2048, 00:15:32.788 "data_size": 63488 
00:15:32.788 }, 00:15:32.788 { 00:15:32.788 "name": null, 00:15:32.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.788 "is_configured": false, 00:15:32.788 "data_offset": 0, 00:15:32.788 "data_size": 63488 00:15:32.788 }, 00:15:32.788 { 00:15:32.788 "name": "BaseBdev3", 00:15:32.788 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:32.788 "is_configured": true, 00:15:32.788 "data_offset": 2048, 00:15:32.788 "data_size": 63488 00:15:32.788 }, 00:15:32.788 { 00:15:32.788 "name": "BaseBdev4", 00:15:32.788 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:32.788 "is_configured": true, 00:15:32.788 "data_offset": 2048, 00:15:32.788 "data_size": 63488 00:15:32.788 } 00:15:32.788 ] 00:15:32.788 }' 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.788 10:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.047 10:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.047 10:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.047 [2024-11-19 10:09:47.114912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:33.565 99.00 IOPS, 297.00 MiB/s [2024-11-19T10:09:47.797Z] [2024-11-19 10:09:47.691884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.824 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.083 "name": "raid_bdev1", 00:15:34.083 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:34.083 "strip_size_kb": 0, 00:15:34.083 "state": "online", 00:15:34.083 "raid_level": "raid1", 00:15:34.083 "superblock": true, 00:15:34.083 "num_base_bdevs": 4, 00:15:34.083 "num_base_bdevs_discovered": 3, 00:15:34.083 "num_base_bdevs_operational": 3, 00:15:34.083 "process": { 00:15:34.083 "type": "rebuild", 00:15:34.083 "target": "spare", 00:15:34.083 "progress": { 00:15:34.083 "blocks": 53248, 00:15:34.083 "percent": 83 00:15:34.083 } 00:15:34.083 }, 00:15:34.083 "base_bdevs_list": [ 00:15:34.083 { 00:15:34.083 "name": "spare", 00:15:34.083 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:34.083 "is_configured": true, 00:15:34.083 "data_offset": 2048, 00:15:34.083 "data_size": 63488 00:15:34.083 }, 00:15:34.083 { 00:15:34.083 "name": null, 00:15:34.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.083 "is_configured": false, 00:15:34.083 
"data_offset": 0, 00:15:34.083 "data_size": 63488 00:15:34.083 }, 00:15:34.083 { 00:15:34.083 "name": "BaseBdev3", 00:15:34.083 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:34.083 "is_configured": true, 00:15:34.083 "data_offset": 2048, 00:15:34.083 "data_size": 63488 00:15:34.083 }, 00:15:34.083 { 00:15:34.083 "name": "BaseBdev4", 00:15:34.083 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:34.083 "is_configured": true, 00:15:34.083 "data_offset": 2048, 00:15:34.083 "data_size": 63488 00:15:34.083 } 00:15:34.083 ] 00:15:34.083 }' 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.083 10:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.083 [2024-11-19 10:09:48.269435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:34.601 90.57 IOPS, 271.71 MiB/s [2024-11-19T10:09:48.833Z] [2024-11-19 10:09:48.606678] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.601 [2024-11-19 10:09:48.714762] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.601 [2024-11-19 10:09:48.721191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.168 10:09:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.168 "name": "raid_bdev1", 00:15:35.168 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:35.168 "strip_size_kb": 0, 00:15:35.168 "state": "online", 00:15:35.168 "raid_level": "raid1", 00:15:35.168 "superblock": true, 00:15:35.168 "num_base_bdevs": 4, 00:15:35.168 "num_base_bdevs_discovered": 3, 00:15:35.168 "num_base_bdevs_operational": 3, 00:15:35.168 "base_bdevs_list": [ 00:15:35.168 { 00:15:35.168 "name": "spare", 00:15:35.168 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:35.168 "is_configured": true, 00:15:35.168 "data_offset": 2048, 00:15:35.168 "data_size": 63488 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "name": null, 00:15:35.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.168 "is_configured": false, 00:15:35.168 "data_offset": 0, 00:15:35.168 "data_size": 63488 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "name": "BaseBdev3", 00:15:35.168 "uuid": 
"bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:35.168 "is_configured": true, 00:15:35.168 "data_offset": 2048, 00:15:35.168 "data_size": 63488 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "name": "BaseBdev4", 00:15:35.168 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:35.168 "is_configured": true, 00:15:35.168 "data_offset": 2048, 00:15:35.168 "data_size": 63488 00:15:35.168 } 00:15:35.168 ] 00:15:35.168 }' 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.168 82.50 IOPS, 247.50 MiB/s [2024-11-19T10:09:49.400Z] 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.168 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.427 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.427 "name": "raid_bdev1", 00:15:35.427 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:35.427 "strip_size_kb": 0, 00:15:35.427 "state": "online", 00:15:35.427 "raid_level": "raid1", 00:15:35.427 "superblock": true, 00:15:35.427 "num_base_bdevs": 4, 00:15:35.427 "num_base_bdevs_discovered": 3, 00:15:35.427 "num_base_bdevs_operational": 3, 00:15:35.427 "base_bdevs_list": [ 00:15:35.427 { 00:15:35.427 "name": "spare", 00:15:35.427 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:35.427 "is_configured": true, 00:15:35.427 "data_offset": 2048, 00:15:35.427 "data_size": 63488 00:15:35.427 }, 00:15:35.427 { 00:15:35.427 "name": null, 00:15:35.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.427 "is_configured": false, 00:15:35.427 "data_offset": 0, 00:15:35.427 "data_size": 63488 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "name": "BaseBdev3", 00:15:35.428 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:35.428 "is_configured": true, 00:15:35.428 "data_offset": 2048, 00:15:35.428 "data_size": 63488 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "name": "BaseBdev4", 00:15:35.428 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:35.428 "is_configured": true, 00:15:35.428 "data_offset": 2048, 00:15:35.428 "data_size": 63488 00:15:35.428 } 00:15:35.428 ] 00:15:35.428 }' 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.428 10:09:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.428 "name": "raid_bdev1", 00:15:35.428 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:35.428 
"strip_size_kb": 0, 00:15:35.428 "state": "online", 00:15:35.428 "raid_level": "raid1", 00:15:35.428 "superblock": true, 00:15:35.428 "num_base_bdevs": 4, 00:15:35.428 "num_base_bdevs_discovered": 3, 00:15:35.428 "num_base_bdevs_operational": 3, 00:15:35.428 "base_bdevs_list": [ 00:15:35.428 { 00:15:35.428 "name": "spare", 00:15:35.428 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:35.428 "is_configured": true, 00:15:35.428 "data_offset": 2048, 00:15:35.428 "data_size": 63488 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "name": null, 00:15:35.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.428 "is_configured": false, 00:15:35.428 "data_offset": 0, 00:15:35.428 "data_size": 63488 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "name": "BaseBdev3", 00:15:35.428 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:35.428 "is_configured": true, 00:15:35.428 "data_offset": 2048, 00:15:35.428 "data_size": 63488 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "name": "BaseBdev4", 00:15:35.428 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:35.428 "is_configured": true, 00:15:35.428 "data_offset": 2048, 00:15:35.428 "data_size": 63488 00:15:35.428 } 00:15:35.428 ] 00:15:35.428 }' 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.428 10:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.996 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.996 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.996 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.996 [2024-11-19 10:09:50.028116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.996 [2024-11-19 10:09:50.028163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:15:35.996 00:15:35.996 Latency(us) 00:15:35.996 [2024-11-19T10:09:50.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.996 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:35.996 raid_bdev1 : 8.73 78.60 235.80 0.00 0.00 17799.06 277.41 125829.12 00:15:35.996 [2024-11-19T10:09:50.228Z] =================================================================================================================== 00:15:35.996 [2024-11-19T10:09:50.228Z] Total : 78.60 235.80 0.00 0.00 17799.06 277.41 125829.12 00:15:35.996 [2024-11-19 10:09:50.080512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.996 [2024-11-19 10:09:50.080598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.997 [2024-11-19 10:09:50.080750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.997 [2024-11-19 10:09:50.080772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.997 { 00:15:35.997 "results": [ 00:15:35.997 { 00:15:35.997 "job": "raid_bdev1", 00:15:35.997 "core_mask": "0x1", 00:15:35.997 "workload": "randrw", 00:15:35.997 "percentage": 50, 00:15:35.997 "status": "finished", 00:15:35.997 "queue_depth": 2, 00:15:35.997 "io_size": 3145728, 00:15:35.997 "runtime": 8.727729, 00:15:35.997 "iops": 78.60005735741795, 00:15:35.997 "mibps": 235.80017207225387, 00:15:35.997 "io_failed": 0, 00:15:35.997 "io_timeout": 0, 00:15:35.997 "avg_latency_us": 17799.057810760667, 00:15:35.997 "min_latency_us": 277.4109090909091, 00:15:35.997 "max_latency_us": 125829.12 00:15:35.997 } 00:15:35.997 ], 00:15:35.997 "core_count": 1 00:15:35.997 } 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq 
length 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:35.997 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:36.256 /dev/nbd0 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.515 1+0 records in 00:15:36.515 1+0 records out 00:15:36.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563621 s, 7.3 MB/s 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:36.515 10:09:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:36.515 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:36.774 /dev/nbd1 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.774 1+0 records in 00:15:36.774 1+0 records out 00:15:36.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00412862 s, 992 kB/s 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:36.774 10:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.033 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.292 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:37.552 /dev/nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.552 1+0 records in 00:15:37.552 1+0 records out 00:15:37.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365231 s, 11.2 MB/s 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.552 10:09:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.552 10:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:38.120 10:09:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.120 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.378 
[2024-11-19 10:09:52.430128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.378 [2024-11-19 10:09:52.430214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.378 [2024-11-19 10:09:52.430249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:38.378 [2024-11-19 10:09:52.430267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.378 [2024-11-19 10:09:52.433448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.378 [2024-11-19 10:09:52.433499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.378 [2024-11-19 10:09:52.433630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:38.378 [2024-11-19 10:09:52.433717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.378 [2024-11-19 10:09:52.433939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.378 [2024-11-19 10:09:52.434098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.378 spare 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:38.378 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.379 [2024-11-19 10:09:52.534310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:38.379 [2024-11-19 10:09:52.534410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:38.379 [2024-11-19 10:09:52.534948] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:38.379 [2024-11-19 10:09:52.535260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:38.379 [2024-11-19 10:09:52.535294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:38.379 [2024-11-19 10:09:52.535563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.379 "name": "raid_bdev1", 00:15:38.379 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:38.379 "strip_size_kb": 0, 00:15:38.379 "state": "online", 00:15:38.379 "raid_level": "raid1", 00:15:38.379 "superblock": true, 00:15:38.379 "num_base_bdevs": 4, 00:15:38.379 "num_base_bdevs_discovered": 3, 00:15:38.379 "num_base_bdevs_operational": 3, 00:15:38.379 "base_bdevs_list": [ 00:15:38.379 { 00:15:38.379 "name": "spare", 00:15:38.379 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:38.379 "is_configured": true, 00:15:38.379 "data_offset": 2048, 00:15:38.379 "data_size": 63488 00:15:38.379 }, 00:15:38.379 { 00:15:38.379 "name": null, 00:15:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.379 "is_configured": false, 00:15:38.379 "data_offset": 2048, 00:15:38.379 "data_size": 63488 00:15:38.379 }, 00:15:38.379 { 00:15:38.379 "name": "BaseBdev3", 00:15:38.379 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:38.379 "is_configured": true, 00:15:38.379 "data_offset": 2048, 00:15:38.379 "data_size": 63488 00:15:38.379 }, 00:15:38.379 { 00:15:38.379 "name": "BaseBdev4", 00:15:38.379 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:38.379 "is_configured": true, 00:15:38.379 "data_offset": 2048, 00:15:38.379 "data_size": 63488 00:15:38.379 } 00:15:38.379 ] 00:15:38.379 }' 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.379 10:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.946 "name": "raid_bdev1", 00:15:38.946 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:38.946 "strip_size_kb": 0, 00:15:38.946 "state": "online", 00:15:38.946 "raid_level": "raid1", 00:15:38.946 "superblock": true, 00:15:38.946 "num_base_bdevs": 4, 00:15:38.946 "num_base_bdevs_discovered": 3, 00:15:38.946 "num_base_bdevs_operational": 3, 00:15:38.946 "base_bdevs_list": [ 00:15:38.946 { 00:15:38.946 "name": "spare", 00:15:38.946 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:38.946 "is_configured": true, 00:15:38.946 "data_offset": 2048, 00:15:38.946 "data_size": 63488 00:15:38.946 }, 00:15:38.946 { 00:15:38.946 "name": null, 00:15:38.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.946 "is_configured": false, 00:15:38.946 "data_offset": 2048, 00:15:38.946 "data_size": 63488 00:15:38.946 }, 00:15:38.946 { 00:15:38.946 "name": 
"BaseBdev3", 00:15:38.946 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:38.946 "is_configured": true, 00:15:38.946 "data_offset": 2048, 00:15:38.946 "data_size": 63488 00:15:38.946 }, 00:15:38.946 { 00:15:38.946 "name": "BaseBdev4", 00:15:38.946 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:38.946 "is_configured": true, 00:15:38.946 "data_offset": 2048, 00:15:38.946 "data_size": 63488 00:15:38.946 } 00:15:38.946 ] 00:15:38.946 }' 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.946 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.205 [2024-11-19 10:09:53.274638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.205 "name": "raid_bdev1", 00:15:39.205 "uuid": 
"d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:39.205 "strip_size_kb": 0, 00:15:39.205 "state": "online", 00:15:39.205 "raid_level": "raid1", 00:15:39.205 "superblock": true, 00:15:39.205 "num_base_bdevs": 4, 00:15:39.205 "num_base_bdevs_discovered": 2, 00:15:39.205 "num_base_bdevs_operational": 2, 00:15:39.205 "base_bdevs_list": [ 00:15:39.205 { 00:15:39.205 "name": null, 00:15:39.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.205 "is_configured": false, 00:15:39.205 "data_offset": 0, 00:15:39.205 "data_size": 63488 00:15:39.205 }, 00:15:39.205 { 00:15:39.205 "name": null, 00:15:39.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.205 "is_configured": false, 00:15:39.205 "data_offset": 2048, 00:15:39.205 "data_size": 63488 00:15:39.205 }, 00:15:39.205 { 00:15:39.205 "name": "BaseBdev3", 00:15:39.205 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:39.205 "is_configured": true, 00:15:39.205 "data_offset": 2048, 00:15:39.205 "data_size": 63488 00:15:39.205 }, 00:15:39.205 { 00:15:39.205 "name": "BaseBdev4", 00:15:39.205 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:39.205 "is_configured": true, 00:15:39.205 "data_offset": 2048, 00:15:39.205 "data_size": 63488 00:15:39.205 } 00:15:39.205 ] 00:15:39.205 }' 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.205 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.772 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.772 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.772 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.772 [2024-11-19 10:09:53.806916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.772 [2024-11-19 10:09:53.807227] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:39.772 [2024-11-19 10:09:53.807270] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:39.772 [2024-11-19 10:09:53.807324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.772 [2024-11-19 10:09:53.821729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:39.772 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.772 10:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:39.772 [2024-11-19 10:09:53.824553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.726 "name": "raid_bdev1", 00:15:40.726 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:40.726 "strip_size_kb": 0, 00:15:40.726 "state": "online", 00:15:40.726 "raid_level": "raid1", 00:15:40.726 "superblock": true, 00:15:40.726 "num_base_bdevs": 4, 00:15:40.726 "num_base_bdevs_discovered": 3, 00:15:40.726 "num_base_bdevs_operational": 3, 00:15:40.726 "process": { 00:15:40.726 "type": "rebuild", 00:15:40.726 "target": "spare", 00:15:40.726 "progress": { 00:15:40.726 "blocks": 18432, 00:15:40.726 "percent": 29 00:15:40.726 } 00:15:40.726 }, 00:15:40.726 "base_bdevs_list": [ 00:15:40.726 { 00:15:40.726 "name": "spare", 00:15:40.726 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:40.726 "is_configured": true, 00:15:40.726 "data_offset": 2048, 00:15:40.726 "data_size": 63488 00:15:40.726 }, 00:15:40.726 { 00:15:40.726 "name": null, 00:15:40.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.726 "is_configured": false, 00:15:40.726 "data_offset": 2048, 00:15:40.726 "data_size": 63488 00:15:40.726 }, 00:15:40.726 { 00:15:40.726 "name": "BaseBdev3", 00:15:40.726 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:40.726 "is_configured": true, 00:15:40.726 "data_offset": 2048, 00:15:40.726 "data_size": 63488 00:15:40.726 }, 00:15:40.726 { 00:15:40.726 "name": "BaseBdev4", 00:15:40.726 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:40.726 "is_configured": true, 00:15:40.726 "data_offset": 2048, 00:15:40.726 "data_size": 63488 00:15:40.726 } 00:15:40.726 ] 00:15:40.726 }' 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.726 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:40.984 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.984 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.985 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.985 10:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.985 [2024-11-19 10:09:54.998826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.985 [2024-11-19 10:09:55.036956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.985 [2024-11-19 10:09:55.037107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.985 [2024-11-19 10:09:55.037135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.985 [2024-11-19 10:09:55.037152] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.985 "name": "raid_bdev1", 00:15:40.985 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:40.985 "strip_size_kb": 0, 00:15:40.985 "state": "online", 00:15:40.985 "raid_level": "raid1", 00:15:40.985 "superblock": true, 00:15:40.985 "num_base_bdevs": 4, 00:15:40.985 "num_base_bdevs_discovered": 2, 00:15:40.985 "num_base_bdevs_operational": 2, 00:15:40.985 "base_bdevs_list": [ 00:15:40.985 { 00:15:40.985 "name": null, 00:15:40.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.985 "is_configured": false, 00:15:40.985 "data_offset": 0, 00:15:40.985 "data_size": 63488 00:15:40.985 }, 00:15:40.985 { 00:15:40.985 "name": null, 00:15:40.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.985 "is_configured": false, 00:15:40.985 "data_offset": 2048, 00:15:40.985 "data_size": 63488 00:15:40.985 }, 00:15:40.985 { 00:15:40.985 "name": "BaseBdev3", 00:15:40.985 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:40.985 "is_configured": true, 00:15:40.985 "data_offset": 2048, 
00:15:40.985 "data_size": 63488 00:15:40.985 }, 00:15:40.985 { 00:15:40.985 "name": "BaseBdev4", 00:15:40.985 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:40.985 "is_configured": true, 00:15:40.985 "data_offset": 2048, 00:15:40.985 "data_size": 63488 00:15:40.985 } 00:15:40.985 ] 00:15:40.985 }' 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.985 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.551 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.551 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.551 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.551 [2024-11-19 10:09:55.590488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.551 [2024-11-19 10:09:55.590587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.551 [2024-11-19 10:09:55.590627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:41.551 [2024-11-19 10:09:55.590647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.551 [2024-11-19 10:09:55.591344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.551 [2024-11-19 10:09:55.591387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.551 [2024-11-19 10:09:55.591489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.551 [2024-11-19 10:09:55.591514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:41.551 [2024-11-19 10:09:55.591538] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:15:41.551 [2024-11-19 10:09:55.591603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.551 [2024-11-19 10:09:55.606415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:41.551 spare 00:15:41.551 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.551 10:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:41.551 [2024-11-19 10:09:55.609284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.484 "name": "raid_bdev1", 00:15:42.484 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:42.484 
"strip_size_kb": 0, 00:15:42.484 "state": "online", 00:15:42.484 "raid_level": "raid1", 00:15:42.484 "superblock": true, 00:15:42.484 "num_base_bdevs": 4, 00:15:42.484 "num_base_bdevs_discovered": 3, 00:15:42.484 "num_base_bdevs_operational": 3, 00:15:42.484 "process": { 00:15:42.484 "type": "rebuild", 00:15:42.484 "target": "spare", 00:15:42.484 "progress": { 00:15:42.484 "blocks": 20480, 00:15:42.484 "percent": 32 00:15:42.484 } 00:15:42.484 }, 00:15:42.484 "base_bdevs_list": [ 00:15:42.484 { 00:15:42.484 "name": "spare", 00:15:42.484 "uuid": "7c562afa-d461-556a-8707-a821bb96e873", 00:15:42.484 "is_configured": true, 00:15:42.484 "data_offset": 2048, 00:15:42.484 "data_size": 63488 00:15:42.484 }, 00:15:42.484 { 00:15:42.484 "name": null, 00:15:42.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.484 "is_configured": false, 00:15:42.484 "data_offset": 2048, 00:15:42.484 "data_size": 63488 00:15:42.484 }, 00:15:42.484 { 00:15:42.484 "name": "BaseBdev3", 00:15:42.484 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:42.484 "is_configured": true, 00:15:42.484 "data_offset": 2048, 00:15:42.484 "data_size": 63488 00:15:42.484 }, 00:15:42.484 { 00:15:42.484 "name": "BaseBdev4", 00:15:42.484 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:42.484 "is_configured": true, 00:15:42.484 "data_offset": 2048, 00:15:42.484 "data_size": 63488 00:15:42.484 } 00:15:42.484 ] 00:15:42.484 }' 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.484 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.742 [2024-11-19 10:09:56.767419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.742 [2024-11-19 10:09:56.821108] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.742 [2024-11-19 10:09:56.821309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.742 [2024-11-19 10:09:56.821345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.742 [2024-11-19 10:09:56.821358] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.742 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.743 
10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.743 "name": "raid_bdev1", 00:15:42.743 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:42.743 "strip_size_kb": 0, 00:15:42.743 "state": "online", 00:15:42.743 "raid_level": "raid1", 00:15:42.743 "superblock": true, 00:15:42.743 "num_base_bdevs": 4, 00:15:42.743 "num_base_bdevs_discovered": 2, 00:15:42.743 "num_base_bdevs_operational": 2, 00:15:42.743 "base_bdevs_list": [ 00:15:42.743 { 00:15:42.743 "name": null, 00:15:42.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.743 "is_configured": false, 00:15:42.743 "data_offset": 0, 00:15:42.743 "data_size": 63488 00:15:42.743 }, 00:15:42.743 { 00:15:42.743 "name": null, 00:15:42.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.743 "is_configured": false, 00:15:42.743 "data_offset": 2048, 00:15:42.743 "data_size": 63488 00:15:42.743 }, 00:15:42.743 { 00:15:42.743 "name": "BaseBdev3", 00:15:42.743 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:42.743 "is_configured": true, 00:15:42.743 "data_offset": 2048, 00:15:42.743 "data_size": 63488 00:15:42.743 }, 00:15:42.743 { 00:15:42.743 "name": "BaseBdev4", 00:15:42.743 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:42.743 "is_configured": true, 00:15:42.743 "data_offset": 2048, 
00:15:42.743 "data_size": 63488 00:15:42.743 } 00:15:42.743 ] 00:15:42.743 }' 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.743 10:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.308 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.308 "name": "raid_bdev1", 00:15:43.308 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:43.308 "strip_size_kb": 0, 00:15:43.308 "state": "online", 00:15:43.308 "raid_level": "raid1", 00:15:43.308 "superblock": true, 00:15:43.308 "num_base_bdevs": 4, 00:15:43.308 "num_base_bdevs_discovered": 2, 00:15:43.308 "num_base_bdevs_operational": 2, 00:15:43.309 "base_bdevs_list": [ 00:15:43.309 { 00:15:43.309 "name": null, 00:15:43.309 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:43.309 "is_configured": false, 00:15:43.309 "data_offset": 0, 00:15:43.309 "data_size": 63488 00:15:43.309 }, 00:15:43.309 { 00:15:43.309 "name": null, 00:15:43.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.309 "is_configured": false, 00:15:43.309 "data_offset": 2048, 00:15:43.309 "data_size": 63488 00:15:43.309 }, 00:15:43.309 { 00:15:43.309 "name": "BaseBdev3", 00:15:43.309 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:43.309 "is_configured": true, 00:15:43.309 "data_offset": 2048, 00:15:43.309 "data_size": 63488 00:15:43.309 }, 00:15:43.309 { 00:15:43.309 "name": "BaseBdev4", 00:15:43.309 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:43.309 "is_configured": true, 00:15:43.309 "data_offset": 2048, 00:15:43.309 "data_size": 63488 00:15:43.309 } 00:15:43.309 ] 00:15:43.309 }' 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:43.309 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.309 [2024-11-19 10:09:57.538137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.309 [2024-11-19 10:09:57.538234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.309 [2024-11-19 10:09:57.538282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:43.309 [2024-11-19 10:09:57.538299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.309 [2024-11-19 10:09:57.538994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.309 [2024-11-19 10:09:57.539030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.309 [2024-11-19 10:09:57.539161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:43.309 [2024-11-19 10:09:57.539193] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:43.309 [2024-11-19 10:09:57.539211] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:43.309 [2024-11-19 10:09:57.539227] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:43.567 BaseBdev1 00:15:43.567 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.567 10:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.503 "name": "raid_bdev1", 00:15:44.503 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:44.503 "strip_size_kb": 0, 00:15:44.503 "state": "online", 00:15:44.503 "raid_level": "raid1", 00:15:44.503 "superblock": true, 00:15:44.503 "num_base_bdevs": 4, 00:15:44.503 "num_base_bdevs_discovered": 2, 00:15:44.503 "num_base_bdevs_operational": 2, 00:15:44.503 "base_bdevs_list": [ 00:15:44.503 { 00:15:44.503 "name": null, 00:15:44.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.503 
"is_configured": false, 00:15:44.503 "data_offset": 0, 00:15:44.503 "data_size": 63488 00:15:44.503 }, 00:15:44.503 { 00:15:44.503 "name": null, 00:15:44.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.503 "is_configured": false, 00:15:44.503 "data_offset": 2048, 00:15:44.503 "data_size": 63488 00:15:44.503 }, 00:15:44.503 { 00:15:44.503 "name": "BaseBdev3", 00:15:44.503 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:44.503 "is_configured": true, 00:15:44.503 "data_offset": 2048, 00:15:44.503 "data_size": 63488 00:15:44.503 }, 00:15:44.503 { 00:15:44.503 "name": "BaseBdev4", 00:15:44.503 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:44.503 "is_configured": true, 00:15:44.503 "data_offset": 2048, 00:15:44.503 "data_size": 63488 00:15:44.503 } 00:15:44.503 ] 00:15:44.503 }' 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.503 10:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.070 "name": "raid_bdev1", 00:15:45.070 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:45.070 "strip_size_kb": 0, 00:15:45.070 "state": "online", 00:15:45.070 "raid_level": "raid1", 00:15:45.070 "superblock": true, 00:15:45.070 "num_base_bdevs": 4, 00:15:45.070 "num_base_bdevs_discovered": 2, 00:15:45.070 "num_base_bdevs_operational": 2, 00:15:45.070 "base_bdevs_list": [ 00:15:45.070 { 00:15:45.070 "name": null, 00:15:45.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.070 "is_configured": false, 00:15:45.070 "data_offset": 0, 00:15:45.070 "data_size": 63488 00:15:45.070 }, 00:15:45.070 { 00:15:45.070 "name": null, 00:15:45.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.070 "is_configured": false, 00:15:45.070 "data_offset": 2048, 00:15:45.070 "data_size": 63488 00:15:45.070 }, 00:15:45.070 { 00:15:45.070 "name": "BaseBdev3", 00:15:45.070 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:45.070 "is_configured": true, 00:15:45.070 "data_offset": 2048, 00:15:45.070 "data_size": 63488 00:15:45.070 }, 00:15:45.070 { 00:15:45.070 "name": "BaseBdev4", 00:15:45.070 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:45.070 "is_configured": true, 00:15:45.070 "data_offset": 2048, 00:15:45.070 "data_size": 63488 00:15:45.070 } 00:15:45.070 ] 00:15:45.070 }' 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.070 [2024-11-19 10:09:59.182852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.070 [2024-11-19 10:09:59.183247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:45.070 [2024-11-19 10:09:59.183280] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.070 request: 00:15:45.070 { 00:15:45.070 "base_bdev": "BaseBdev1", 00:15:45.070 "raid_bdev": "raid_bdev1", 00:15:45.070 "method": "bdev_raid_add_base_bdev", 00:15:45.070 "req_id": 1 00:15:45.070 } 00:15:45.070 Got JSON-RPC error response 00:15:45.070 response: 00:15:45.070 { 
00:15:45.070 "code": -22, 00:15:45.070 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:45.070 } 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.070 10:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.004 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.262 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.262 "name": "raid_bdev1", 00:15:46.262 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:46.262 "strip_size_kb": 0, 00:15:46.262 "state": "online", 00:15:46.262 "raid_level": "raid1", 00:15:46.262 "superblock": true, 00:15:46.262 "num_base_bdevs": 4, 00:15:46.262 "num_base_bdevs_discovered": 2, 00:15:46.262 "num_base_bdevs_operational": 2, 00:15:46.262 "base_bdevs_list": [ 00:15:46.262 { 00:15:46.262 "name": null, 00:15:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.262 "is_configured": false, 00:15:46.262 "data_offset": 0, 00:15:46.262 "data_size": 63488 00:15:46.262 }, 00:15:46.262 { 00:15:46.262 "name": null, 00:15:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.262 "is_configured": false, 00:15:46.262 "data_offset": 2048, 00:15:46.262 "data_size": 63488 00:15:46.262 }, 00:15:46.262 { 00:15:46.262 "name": "BaseBdev3", 00:15:46.262 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:46.262 "is_configured": true, 00:15:46.262 "data_offset": 2048, 00:15:46.262 "data_size": 63488 00:15:46.262 }, 00:15:46.262 { 00:15:46.262 "name": "BaseBdev4", 00:15:46.262 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:46.262 "is_configured": true, 00:15:46.262 "data_offset": 2048, 00:15:46.262 "data_size": 63488 00:15:46.262 } 00:15:46.262 ] 00:15:46.262 }' 00:15:46.262 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.262 10:10:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.520 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.778 "name": "raid_bdev1", 00:15:46.778 "uuid": "d5a5e99f-7cee-47d6-88bc-895922205a0c", 00:15:46.778 "strip_size_kb": 0, 00:15:46.778 "state": "online", 00:15:46.778 "raid_level": "raid1", 00:15:46.778 "superblock": true, 00:15:46.778 "num_base_bdevs": 4, 00:15:46.778 "num_base_bdevs_discovered": 2, 00:15:46.778 "num_base_bdevs_operational": 2, 00:15:46.778 "base_bdevs_list": [ 00:15:46.778 { 00:15:46.778 "name": null, 00:15:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.778 "is_configured": false, 00:15:46.778 "data_offset": 0, 00:15:46.778 "data_size": 63488 00:15:46.778 }, 00:15:46.778 { 00:15:46.778 "name": null, 00:15:46.778 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:46.778 "is_configured": false, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 }, 00:15:46.778 { 00:15:46.778 "name": "BaseBdev3", 00:15:46.778 "uuid": "bd1be4da-f002-5907-9f2f-093f1fbe5cb9", 00:15:46.778 "is_configured": true, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 }, 00:15:46.778 { 00:15:46.778 "name": "BaseBdev4", 00:15:46.778 "uuid": "cb40dee3-afe3-533d-8de6-f2caa20ec7ab", 00:15:46.778 "is_configured": true, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 } 00:15:46.778 ] 00:15:46.778 }' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79469 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79469 ']' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79469 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79469 00:15:46.778 killing process with pid 79469 00:15:46.778 Received shutdown signal, test time was about 19.578513 seconds 00:15:46.778 00:15:46.778 Latency(us) 00:15:46.778 [2024-11-19T10:10:01.010Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:46.778 [2024-11-19T10:10:01.010Z] =================================================================================================================== 00:15:46.778 [2024-11-19T10:10:01.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79469' 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79469 00:15:46.778 [2024-11-19 10:10:00.910778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.778 10:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79469 00:15:46.778 [2024-11-19 10:10:00.910983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.778 [2024-11-19 10:10:00.911089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.778 [2024-11-19 10:10:00.911114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:47.344 [2024-11-19 10:10:01.321120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.277 10:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:48.277 00:15:48.277 real 0m23.368s 00:15:48.277 user 0m31.562s 00:15:48.277 sys 0m2.578s 00:15:48.277 10:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.277 10:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.277 ************************************ 00:15:48.277 END TEST raid_rebuild_test_sb_io 00:15:48.277 
************************************ 00:15:48.535 10:10:02 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:48.535 10:10:02 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:48.535 10:10:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:48.535 10:10:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.535 10:10:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.535 ************************************ 00:15:48.535 START TEST raid5f_state_function_test 00:15:48.535 ************************************ 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.535 10:10:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80208 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80208' 00:15:48.535 Process raid pid: 80208 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80208 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80208 ']' 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.535 10:10:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.535 [2024-11-19 10:10:02.651923] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:15:48.535 [2024-11-19 10:10:02.652112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.794 [2024-11-19 10:10:02.831976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.794 [2024-11-19 10:10:02.980197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.052 [2024-11-19 10:10:03.210412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.052 [2024-11-19 10:10:03.210488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.619 [2024-11-19 10:10:03.611284] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.619 [2024-11-19 10:10:03.611359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.619 [2024-11-19 10:10:03.611378] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.619 [2024-11-19 10:10:03.611396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.619 [2024-11-19 10:10:03.611407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:49.619 [2024-11-19 10:10:03.611422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.619 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.620 "name": "Existed_Raid", 00:15:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.620 "strip_size_kb": 64, 00:15:49.620 "state": "configuring", 00:15:49.620 "raid_level": "raid5f", 00:15:49.620 "superblock": false, 00:15:49.620 "num_base_bdevs": 3, 00:15:49.620 "num_base_bdevs_discovered": 0, 00:15:49.620 "num_base_bdevs_operational": 3, 00:15:49.620 "base_bdevs_list": [ 00:15:49.620 { 00:15:49.620 "name": "BaseBdev1", 00:15:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.620 "is_configured": false, 00:15:49.620 "data_offset": 0, 00:15:49.620 "data_size": 0 00:15:49.620 }, 00:15:49.620 { 00:15:49.620 "name": "BaseBdev2", 00:15:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.620 "is_configured": false, 00:15:49.620 "data_offset": 0, 00:15:49.620 "data_size": 0 00:15:49.620 }, 00:15:49.620 { 00:15:49.620 "name": "BaseBdev3", 00:15:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.620 "is_configured": false, 00:15:49.620 "data_offset": 0, 00:15:49.620 "data_size": 0 00:15:49.620 } 00:15:49.620 ] 00:15:49.620 }' 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.620 10:10:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 [2024-11-19 10:10:04.139445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.188 [2024-11-19 10:10:04.139514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 [2024-11-19 10:10:04.147342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.188 [2024-11-19 10:10:04.147559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.188 [2024-11-19 10:10:04.147587] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.188 [2024-11-19 10:10:04.147607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.188 [2024-11-19 10:10:04.147618] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.188 [2024-11-19 10:10:04.147633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 [2024-11-19 10:10:04.196413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.188 BaseBdev1 00:15:50.188 10:10:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 [ 00:15:50.188 { 00:15:50.188 "name": "BaseBdev1", 00:15:50.188 "aliases": [ 00:15:50.188 "7803ebd1-a5e9-40db-b84c-7ac3c1131bde" 00:15:50.188 ], 00:15:50.188 "product_name": "Malloc disk", 00:15:50.188 "block_size": 512, 00:15:50.188 "num_blocks": 65536, 00:15:50.188 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:50.188 "assigned_rate_limits": { 00:15:50.188 "rw_ios_per_sec": 0, 00:15:50.188 
"rw_mbytes_per_sec": 0, 00:15:50.188 "r_mbytes_per_sec": 0, 00:15:50.188 "w_mbytes_per_sec": 0 00:15:50.188 }, 00:15:50.188 "claimed": true, 00:15:50.188 "claim_type": "exclusive_write", 00:15:50.188 "zoned": false, 00:15:50.188 "supported_io_types": { 00:15:50.188 "read": true, 00:15:50.188 "write": true, 00:15:50.188 "unmap": true, 00:15:50.188 "flush": true, 00:15:50.188 "reset": true, 00:15:50.188 "nvme_admin": false, 00:15:50.188 "nvme_io": false, 00:15:50.188 "nvme_io_md": false, 00:15:50.188 "write_zeroes": true, 00:15:50.188 "zcopy": true, 00:15:50.188 "get_zone_info": false, 00:15:50.188 "zone_management": false, 00:15:50.188 "zone_append": false, 00:15:50.188 "compare": false, 00:15:50.188 "compare_and_write": false, 00:15:50.188 "abort": true, 00:15:50.188 "seek_hole": false, 00:15:50.188 "seek_data": false, 00:15:50.188 "copy": true, 00:15:50.188 "nvme_iov_md": false 00:15:50.188 }, 00:15:50.188 "memory_domains": [ 00:15:50.188 { 00:15:50.188 "dma_device_id": "system", 00:15:50.188 "dma_device_type": 1 00:15:50.188 }, 00:15:50.188 { 00:15:50.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.188 "dma_device_type": 2 00:15:50.188 } 00:15:50.188 ], 00:15:50.188 "driver_specific": {} 00:15:50.188 } 00:15:50.188 ] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.188 10:10:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.188 "name": "Existed_Raid", 00:15:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.188 "strip_size_kb": 64, 00:15:50.188 "state": "configuring", 00:15:50.188 "raid_level": "raid5f", 00:15:50.188 "superblock": false, 00:15:50.188 "num_base_bdevs": 3, 00:15:50.188 "num_base_bdevs_discovered": 1, 00:15:50.188 "num_base_bdevs_operational": 3, 00:15:50.188 "base_bdevs_list": [ 00:15:50.188 { 00:15:50.188 "name": "BaseBdev1", 00:15:50.188 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:50.188 "is_configured": true, 00:15:50.188 "data_offset": 0, 00:15:50.188 "data_size": 65536 00:15:50.188 }, 00:15:50.188 { 00:15:50.188 "name": 
"BaseBdev2", 00:15:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.188 "is_configured": false, 00:15:50.188 "data_offset": 0, 00:15:50.188 "data_size": 0 00:15:50.188 }, 00:15:50.188 { 00:15:50.188 "name": "BaseBdev3", 00:15:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.188 "is_configured": false, 00:15:50.188 "data_offset": 0, 00:15:50.188 "data_size": 0 00:15:50.188 } 00:15:50.188 ] 00:15:50.188 }' 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.188 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.755 [2024-11-19 10:10:04.764670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.755 [2024-11-19 10:10:04.764900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.755 [2024-11-19 10:10:04.772702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.755 [2024-11-19 10:10:04.775428] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:50.755 [2024-11-19 10:10:04.775606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.755 [2024-11-19 10:10:04.775634] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.755 [2024-11-19 10:10:04.775652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.755 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.756 "name": "Existed_Raid", 00:15:50.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.756 "strip_size_kb": 64, 00:15:50.756 "state": "configuring", 00:15:50.756 "raid_level": "raid5f", 00:15:50.756 "superblock": false, 00:15:50.756 "num_base_bdevs": 3, 00:15:50.756 "num_base_bdevs_discovered": 1, 00:15:50.756 "num_base_bdevs_operational": 3, 00:15:50.756 "base_bdevs_list": [ 00:15:50.756 { 00:15:50.756 "name": "BaseBdev1", 00:15:50.756 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:50.756 "is_configured": true, 00:15:50.756 "data_offset": 0, 00:15:50.756 "data_size": 65536 00:15:50.756 }, 00:15:50.756 { 00:15:50.756 "name": "BaseBdev2", 00:15:50.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.756 "is_configured": false, 00:15:50.756 "data_offset": 0, 00:15:50.756 "data_size": 0 00:15:50.756 }, 00:15:50.756 { 00:15:50.756 "name": "BaseBdev3", 00:15:50.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.756 "is_configured": false, 00:15:50.756 "data_offset": 0, 00:15:50.756 "data_size": 0 00:15:50.756 } 00:15:50.756 ] 00:15:50.756 }' 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.756 10:10:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 [2024-11-19 10:10:05.335687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.323 BaseBdev2 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.323 [ 00:15:51.323 { 00:15:51.323 "name": "BaseBdev2", 00:15:51.323 "aliases": [ 00:15:51.323 "e7bc7c4b-5361-444d-8b1f-6a320055fc43" 00:15:51.323 ], 00:15:51.323 "product_name": "Malloc disk", 00:15:51.323 "block_size": 512, 00:15:51.323 "num_blocks": 65536, 00:15:51.323 "uuid": "e7bc7c4b-5361-444d-8b1f-6a320055fc43", 00:15:51.323 "assigned_rate_limits": { 00:15:51.323 "rw_ios_per_sec": 0, 00:15:51.323 "rw_mbytes_per_sec": 0, 00:15:51.323 "r_mbytes_per_sec": 0, 00:15:51.323 "w_mbytes_per_sec": 0 00:15:51.323 }, 00:15:51.323 "claimed": true, 00:15:51.323 "claim_type": "exclusive_write", 00:15:51.323 "zoned": false, 00:15:51.323 "supported_io_types": { 00:15:51.323 "read": true, 00:15:51.323 "write": true, 00:15:51.323 "unmap": true, 00:15:51.323 "flush": true, 00:15:51.323 "reset": true, 00:15:51.323 "nvme_admin": false, 00:15:51.323 "nvme_io": false, 00:15:51.323 "nvme_io_md": false, 00:15:51.323 "write_zeroes": true, 00:15:51.323 "zcopy": true, 00:15:51.323 "get_zone_info": false, 00:15:51.323 "zone_management": false, 00:15:51.323 "zone_append": false, 00:15:51.323 "compare": false, 00:15:51.323 "compare_and_write": false, 00:15:51.323 "abort": true, 00:15:51.323 "seek_hole": false, 00:15:51.323 "seek_data": false, 00:15:51.323 "copy": true, 00:15:51.323 "nvme_iov_md": false 00:15:51.323 }, 00:15:51.323 "memory_domains": [ 00:15:51.323 { 00:15:51.323 "dma_device_id": "system", 00:15:51.323 "dma_device_type": 1 00:15:51.323 }, 00:15:51.323 { 00:15:51.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.323 "dma_device_type": 2 00:15:51.323 } 00:15:51.323 ], 00:15:51.323 "driver_specific": {} 00:15:51.323 } 00:15:51.323 ] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.323 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:51.323 "name": "Existed_Raid", 00:15:51.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.323 "strip_size_kb": 64, 00:15:51.323 "state": "configuring", 00:15:51.323 "raid_level": "raid5f", 00:15:51.323 "superblock": false, 00:15:51.323 "num_base_bdevs": 3, 00:15:51.323 "num_base_bdevs_discovered": 2, 00:15:51.323 "num_base_bdevs_operational": 3, 00:15:51.323 "base_bdevs_list": [ 00:15:51.323 { 00:15:51.323 "name": "BaseBdev1", 00:15:51.323 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:51.323 "is_configured": true, 00:15:51.323 "data_offset": 0, 00:15:51.323 "data_size": 65536 00:15:51.323 }, 00:15:51.323 { 00:15:51.323 "name": "BaseBdev2", 00:15:51.324 "uuid": "e7bc7c4b-5361-444d-8b1f-6a320055fc43", 00:15:51.324 "is_configured": true, 00:15:51.324 "data_offset": 0, 00:15:51.324 "data_size": 65536 00:15:51.324 }, 00:15:51.324 { 00:15:51.324 "name": "BaseBdev3", 00:15:51.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.324 "is_configured": false, 00:15:51.324 "data_offset": 0, 00:15:51.324 "data_size": 0 00:15:51.324 } 00:15:51.324 ] 00:15:51.324 }' 00:15:51.324 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.324 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 [2024-11-19 10:10:05.881116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.890 [2024-11-19 10:10:05.881218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.890 [2024-11-19 10:10:05.881242] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:51.890 [2024-11-19 10:10:05.881616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.890 BaseBdev3 00:15:51.890 [2024-11-19 10:10:05.887113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.890 [2024-11-19 10:10:05.887143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:51.890 [2024-11-19 10:10:05.887617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.890 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 [ 00:15:51.890 { 00:15:51.890 "name": "BaseBdev3", 00:15:51.890 "aliases": [ 00:15:51.890 "e4242089-ba22-46a2-a0c9-cf2b7f3389a3" 00:15:51.890 ], 00:15:51.890 "product_name": "Malloc disk", 00:15:51.890 "block_size": 512, 00:15:51.890 "num_blocks": 65536, 00:15:51.890 "uuid": "e4242089-ba22-46a2-a0c9-cf2b7f3389a3", 00:15:51.891 "assigned_rate_limits": { 00:15:51.891 "rw_ios_per_sec": 0, 00:15:51.891 "rw_mbytes_per_sec": 0, 00:15:51.891 "r_mbytes_per_sec": 0, 00:15:51.891 "w_mbytes_per_sec": 0 00:15:51.891 }, 00:15:51.891 "claimed": true, 00:15:51.891 "claim_type": "exclusive_write", 00:15:51.891 "zoned": false, 00:15:51.891 "supported_io_types": { 00:15:51.891 "read": true, 00:15:51.891 "write": true, 00:15:51.891 "unmap": true, 00:15:51.891 "flush": true, 00:15:51.891 "reset": true, 00:15:51.891 "nvme_admin": false, 00:15:51.891 "nvme_io": false, 00:15:51.891 "nvme_io_md": false, 00:15:51.891 "write_zeroes": true, 00:15:51.891 "zcopy": true, 00:15:51.891 "get_zone_info": false, 00:15:51.891 "zone_management": false, 00:15:51.891 "zone_append": false, 00:15:51.891 "compare": false, 00:15:51.891 "compare_and_write": false, 00:15:51.891 "abort": true, 00:15:51.891 "seek_hole": false, 00:15:51.891 "seek_data": false, 00:15:51.891 "copy": true, 00:15:51.891 "nvme_iov_md": false 00:15:51.891 }, 00:15:51.891 "memory_domains": [ 00:15:51.891 { 00:15:51.891 "dma_device_id": "system", 00:15:51.891 "dma_device_type": 1 00:15:51.891 }, 00:15:51.891 { 00:15:51.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.891 "dma_device_type": 2 00:15:51.891 } 00:15:51.891 ], 00:15:51.891 "driver_specific": {} 00:15:51.891 } 00:15:51.891 ] 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.891 10:10:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.891 "name": "Existed_Raid", 00:15:51.891 "uuid": "3d39f9d3-a801-4b12-a940-d9ae186d18a1", 00:15:51.891 "strip_size_kb": 64, 00:15:51.891 "state": "online", 00:15:51.891 "raid_level": "raid5f", 00:15:51.891 "superblock": false, 00:15:51.891 "num_base_bdevs": 3, 00:15:51.891 "num_base_bdevs_discovered": 3, 00:15:51.891 "num_base_bdevs_operational": 3, 00:15:51.891 "base_bdevs_list": [ 00:15:51.891 { 00:15:51.891 "name": "BaseBdev1", 00:15:51.891 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:51.891 "is_configured": true, 00:15:51.891 "data_offset": 0, 00:15:51.891 "data_size": 65536 00:15:51.891 }, 00:15:51.891 { 00:15:51.891 "name": "BaseBdev2", 00:15:51.891 "uuid": "e7bc7c4b-5361-444d-8b1f-6a320055fc43", 00:15:51.891 "is_configured": true, 00:15:51.891 "data_offset": 0, 00:15:51.891 "data_size": 65536 00:15:51.891 }, 00:15:51.891 { 00:15:51.891 "name": "BaseBdev3", 00:15:51.891 "uuid": "e4242089-ba22-46a2-a0c9-cf2b7f3389a3", 00:15:51.891 "is_configured": true, 00:15:51.891 "data_offset": 0, 00:15:51.891 "data_size": 65536 00:15:51.891 } 00:15:51.891 ] 00:15:51.891 }' 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.891 10:10:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.458 10:10:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 [2024-11-19 10:10:06.454197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.458 "name": "Existed_Raid", 00:15:52.458 "aliases": [ 00:15:52.458 "3d39f9d3-a801-4b12-a940-d9ae186d18a1" 00:15:52.458 ], 00:15:52.458 "product_name": "Raid Volume", 00:15:52.458 "block_size": 512, 00:15:52.458 "num_blocks": 131072, 00:15:52.458 "uuid": "3d39f9d3-a801-4b12-a940-d9ae186d18a1", 00:15:52.458 "assigned_rate_limits": { 00:15:52.458 "rw_ios_per_sec": 0, 00:15:52.458 "rw_mbytes_per_sec": 0, 00:15:52.458 "r_mbytes_per_sec": 0, 00:15:52.458 "w_mbytes_per_sec": 0 00:15:52.458 }, 00:15:52.458 "claimed": false, 00:15:52.458 "zoned": false, 00:15:52.458 "supported_io_types": { 00:15:52.458 "read": true, 00:15:52.458 "write": true, 00:15:52.458 "unmap": false, 00:15:52.458 "flush": false, 00:15:52.458 "reset": true, 00:15:52.458 "nvme_admin": false, 00:15:52.458 "nvme_io": false, 00:15:52.458 "nvme_io_md": false, 00:15:52.458 "write_zeroes": true, 00:15:52.458 "zcopy": false, 00:15:52.458 "get_zone_info": false, 00:15:52.458 "zone_management": false, 00:15:52.458 "zone_append": false, 
00:15:52.458 "compare": false, 00:15:52.458 "compare_and_write": false, 00:15:52.458 "abort": false, 00:15:52.458 "seek_hole": false, 00:15:52.458 "seek_data": false, 00:15:52.458 "copy": false, 00:15:52.458 "nvme_iov_md": false 00:15:52.458 }, 00:15:52.458 "driver_specific": { 00:15:52.458 "raid": { 00:15:52.458 "uuid": "3d39f9d3-a801-4b12-a940-d9ae186d18a1", 00:15:52.458 "strip_size_kb": 64, 00:15:52.458 "state": "online", 00:15:52.458 "raid_level": "raid5f", 00:15:52.458 "superblock": false, 00:15:52.458 "num_base_bdevs": 3, 00:15:52.458 "num_base_bdevs_discovered": 3, 00:15:52.458 "num_base_bdevs_operational": 3, 00:15:52.458 "base_bdevs_list": [ 00:15:52.458 { 00:15:52.458 "name": "BaseBdev1", 00:15:52.458 "uuid": "7803ebd1-a5e9-40db-b84c-7ac3c1131bde", 00:15:52.458 "is_configured": true, 00:15:52.458 "data_offset": 0, 00:15:52.458 "data_size": 65536 00:15:52.458 }, 00:15:52.458 { 00:15:52.458 "name": "BaseBdev2", 00:15:52.458 "uuid": "e7bc7c4b-5361-444d-8b1f-6a320055fc43", 00:15:52.458 "is_configured": true, 00:15:52.458 "data_offset": 0, 00:15:52.458 "data_size": 65536 00:15:52.458 }, 00:15:52.458 { 00:15:52.458 "name": "BaseBdev3", 00:15:52.458 "uuid": "e4242089-ba22-46a2-a0c9-cf2b7f3389a3", 00:15:52.458 "is_configured": true, 00:15:52.458 "data_offset": 0, 00:15:52.458 "data_size": 65536 00:15:52.458 } 00:15:52.458 ] 00:15:52.458 } 00:15:52.458 } 00:15:52.458 }' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:52.458 BaseBdev2 00:15:52.458 BaseBdev3' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.458 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.717 [2024-11-19 10:10:06.750086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.717 
10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.717 "name": "Existed_Raid", 00:15:52.717 "uuid": "3d39f9d3-a801-4b12-a940-d9ae186d18a1", 00:15:52.717 "strip_size_kb": 64, 00:15:52.717 "state": 
"online", 00:15:52.717 "raid_level": "raid5f", 00:15:52.717 "superblock": false, 00:15:52.717 "num_base_bdevs": 3, 00:15:52.717 "num_base_bdevs_discovered": 2, 00:15:52.717 "num_base_bdevs_operational": 2, 00:15:52.717 "base_bdevs_list": [ 00:15:52.717 { 00:15:52.717 "name": null, 00:15:52.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.717 "is_configured": false, 00:15:52.717 "data_offset": 0, 00:15:52.717 "data_size": 65536 00:15:52.717 }, 00:15:52.717 { 00:15:52.717 "name": "BaseBdev2", 00:15:52.717 "uuid": "e7bc7c4b-5361-444d-8b1f-6a320055fc43", 00:15:52.717 "is_configured": true, 00:15:52.717 "data_offset": 0, 00:15:52.717 "data_size": 65536 00:15:52.717 }, 00:15:52.717 { 00:15:52.717 "name": "BaseBdev3", 00:15:52.717 "uuid": "e4242089-ba22-46a2-a0c9-cf2b7f3389a3", 00:15:52.717 "is_configured": true, 00:15:52.717 "data_offset": 0, 00:15:52.717 "data_size": 65536 00:15:52.717 } 00:15:52.717 ] 00:15:52.717 }' 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.717 10:10:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.284 [2024-11-19 10:10:07.408373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.284 [2024-11-19 10:10:07.408738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.284 [2024-11-19 10:10:07.501899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.284 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.542 [2024-11-19 10:10:07.566005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:53.542 [2024-11-19 10:10:07.566224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.542 BaseBdev2
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.542 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.801 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.801 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:53.801 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.801 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.801 [
00:15:53.801 {
00:15:53.801 "name": "BaseBdev2",
00:15:53.801 "aliases": [
00:15:53.801 "83001392-f8a1-407e-ab5b-7f7dc1514869"
00:15:53.801 ],
00:15:53.801 "product_name": "Malloc disk",
00:15:53.801 "block_size": 512,
00:15:53.801 "num_blocks": 65536,
00:15:53.801 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:53.801 "assigned_rate_limits": {
00:15:53.801 "rw_ios_per_sec": 0,
00:15:53.801 "rw_mbytes_per_sec": 0,
00:15:53.801 "r_mbytes_per_sec": 0,
00:15:53.801 "w_mbytes_per_sec": 0
00:15:53.801 },
00:15:53.801 "claimed": false,
00:15:53.801 "zoned": false,
00:15:53.801 "supported_io_types": {
00:15:53.801 "read": true,
00:15:53.801 "write": true,
00:15:53.801 "unmap": true,
00:15:53.801 "flush": true,
00:15:53.801 "reset": true,
00:15:53.801 "nvme_admin": false,
00:15:53.801 "nvme_io": false,
00:15:53.801 "nvme_io_md": false,
00:15:53.801 "write_zeroes": true,
00:15:53.801 "zcopy": true,
00:15:53.801 "get_zone_info": false,
00:15:53.801 "zone_management": false,
00:15:53.801 "zone_append": false,
00:15:53.801 "compare": false,
00:15:53.801 "compare_and_write": false,
00:15:53.801 "abort": true,
00:15:53.801 "seek_hole": false,
00:15:53.801 "seek_data": false,
00:15:53.801 "copy": true,
00:15:53.801 "nvme_iov_md": false
00:15:53.801 },
00:15:53.801 "memory_domains": [
00:15:53.801 {
00:15:53.801 "dma_device_id": "system",
00:15:53.801 "dma_device_type": 1
00:15:53.801 },
00:15:53.801 {
00:15:53.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:53.801 "dma_device_type": 2
00:15:53.801 }
00:15:53.801 ],
00:15:53.801 "driver_specific": {}
00:15:53.802 }
00:15:53.802 ]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.802 BaseBdev3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.802 [
00:15:53.802 {
00:15:53.802 "name": "BaseBdev3",
00:15:53.802 "aliases": [
00:15:53.802 "db6e7107-d4db-45e0-ae08-29237ab3e350"
00:15:53.802 ],
00:15:53.802 "product_name": "Malloc disk",
00:15:53.802 "block_size": 512,
00:15:53.802 "num_blocks": 65536,
00:15:53.802 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:53.802 "assigned_rate_limits": {
00:15:53.802 "rw_ios_per_sec": 0,
00:15:53.802 "rw_mbytes_per_sec": 0,
00:15:53.802 "r_mbytes_per_sec": 0,
00:15:53.802 "w_mbytes_per_sec": 0
00:15:53.802 },
00:15:53.802 "claimed": false,
00:15:53.802 "zoned": false,
00:15:53.802 "supported_io_types": {
00:15:53.802 "read": true,
00:15:53.802 "write": true,
00:15:53.802 "unmap": true,
00:15:53.802 "flush": true,
00:15:53.802 "reset": true,
00:15:53.802 "nvme_admin": false,
00:15:53.802 "nvme_io": false,
00:15:53.802 "nvme_io_md": false,
00:15:53.802 "write_zeroes": true,
00:15:53.802 "zcopy": true,
00:15:53.802 "get_zone_info": false,
00:15:53.802 "zone_management": false,
00:15:53.802 "zone_append": false,
00:15:53.802 "compare": false,
00:15:53.802 "compare_and_write": false,
00:15:53.802 "abort": true,
00:15:53.802 "seek_hole": false,
00:15:53.802 "seek_data": false,
00:15:53.802 "copy": true,
00:15:53.802 "nvme_iov_md": false
00:15:53.802 },
00:15:53.802 "memory_domains": [
00:15:53.802 {
00:15:53.802 "dma_device_id": "system",
00:15:53.802 "dma_device_type": 1
00:15:53.802 },
00:15:53.802 {
00:15:53.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:53.802 "dma_device_type": 2
00:15:53.802 }
00:15:53.802 ],
00:15:53.802 "driver_specific": {}
00:15:53.802 }
00:15:53.802 ]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.802 [2024-11-19 10:10:07.873163] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:53.802 [2024-11-19 10:10:07.873370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:53.802 [2024-11-19 10:10:07.873523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:53.802 [2024-11-19 10:10:07.876291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.802 "name": "Existed_Raid",
00:15:53.802 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.802 "strip_size_kb": 64,
00:15:53.802 "state": "configuring",
00:15:53.802 "raid_level": "raid5f",
00:15:53.802 "superblock": false,
00:15:53.802 "num_base_bdevs": 3,
00:15:53.802 "num_base_bdevs_discovered": 2,
00:15:53.802 "num_base_bdevs_operational": 3,
00:15:53.802 "base_bdevs_list": [
00:15:53.802 {
00:15:53.802 "name": "BaseBdev1",
00:15:53.802 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.802 "is_configured": false,
00:15:53.802 "data_offset": 0,
00:15:53.802 "data_size": 0
00:15:53.802 },
00:15:53.802 {
00:15:53.802 "name": "BaseBdev2",
00:15:53.802 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:53.802 "is_configured": true,
00:15:53.802 "data_offset": 0,
00:15:53.802 "data_size": 65536
00:15:53.802 },
00:15:53.802 {
00:15:53.802 "name": "BaseBdev3",
00:15:53.802 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:53.802 "is_configured": true,
00:15:53.802 "data_offset": 0,
00:15:53.802 "data_size": 65536
00:15:53.802 }
00:15:53.802 ]
00:15:53.802 }'
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.802 10:10:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.368 [2024-11-19 10:10:08.373260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:54.368 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.369 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.369 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.369 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.369 "name": "Existed_Raid",
00:15:54.369 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.369 "strip_size_kb": 64,
00:15:54.369 "state": "configuring",
00:15:54.369 "raid_level": "raid5f",
00:15:54.369 "superblock": false,
00:15:54.369 "num_base_bdevs": 3,
00:15:54.369 "num_base_bdevs_discovered": 1,
00:15:54.369 "num_base_bdevs_operational": 3,
00:15:54.369 "base_bdevs_list": [
00:15:54.369 {
00:15:54.369 "name": "BaseBdev1",
00:15:54.369 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.369 "is_configured": false,
00:15:54.369 "data_offset": 0,
00:15:54.369 "data_size": 0
00:15:54.369 },
00:15:54.369 {
00:15:54.369 "name": null,
00:15:54.369 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:54.369 "is_configured": false,
00:15:54.369 "data_offset": 0,
00:15:54.369 "data_size": 65536
00:15:54.369 },
00:15:54.369 {
00:15:54.369 "name": "BaseBdev3",
00:15:54.369 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:54.369 "is_configured": true,
00:15:54.369 "data_offset": 0,
00:15:54.369 "data_size": 65536
00:15:54.369 }
00:15:54.369 ]
00:15:54.369 }'
00:15:54.369 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.369 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.935 [2024-11-19 10:10:08.959266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:54.935 BaseBdev1
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:54.935 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.936 [
00:15:54.936 {
00:15:54.936 "name": "BaseBdev1",
00:15:54.936 "aliases": [
00:15:54.936 "8e330f5a-651f-4c82-8784-0950ec3a32b0"
00:15:54.936 ],
00:15:54.936 "product_name": "Malloc disk",
00:15:54.936 "block_size": 512,
00:15:54.936 "num_blocks": 65536,
00:15:54.936 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0",
00:15:54.936 "assigned_rate_limits": {
00:15:54.936 "rw_ios_per_sec": 0,
00:15:54.936 "rw_mbytes_per_sec": 0,
00:15:54.936 "r_mbytes_per_sec": 0,
00:15:54.936 "w_mbytes_per_sec": 0
00:15:54.936 },
00:15:54.936 "claimed": true,
00:15:54.936 "claim_type": "exclusive_write",
00:15:54.936 "zoned": false,
00:15:54.936 "supported_io_types": {
00:15:54.936 "read": true,
00:15:54.936 "write": true,
00:15:54.936 "unmap": true,
00:15:54.936 "flush": true,
00:15:54.936 "reset": true,
00:15:54.936 "nvme_admin": false,
00:15:54.936 "nvme_io": false,
00:15:54.936 "nvme_io_md": false,
00:15:54.936 "write_zeroes": true,
00:15:54.936 "zcopy": true,
00:15:54.936 "get_zone_info": false,
00:15:54.936 "zone_management": false,
00:15:54.936 "zone_append": false,
00:15:54.936 "compare": false,
00:15:54.936 "compare_and_write": false,
00:15:54.936 "abort": true,
00:15:54.936 "seek_hole": false,
00:15:54.936 "seek_data": false,
00:15:54.936 "copy": true,
00:15:54.936 "nvme_iov_md": false
00:15:54.936 },
00:15:54.936 "memory_domains": [
00:15:54.936 {
00:15:54.936 "dma_device_id": "system",
00:15:54.936 "dma_device_type": 1
00:15:54.936 },
00:15:54.936 {
00:15:54.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:54.936 "dma_device_type": 2
00:15:54.936 }
00:15:54.936 ],
00:15:54.936 "driver_specific": {}
00:15:54.936 }
00:15:54.936 ]
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.936 10:10:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:54.936 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.936 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.936 "name": "Existed_Raid",
00:15:54.936 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.936 "strip_size_kb": 64,
00:15:54.936 "state": "configuring",
00:15:54.936 "raid_level": "raid5f",
00:15:54.936 "superblock": false,
00:15:54.936 "num_base_bdevs": 3,
00:15:54.936 "num_base_bdevs_discovered": 2,
00:15:54.936 "num_base_bdevs_operational": 3,
00:15:54.936 "base_bdevs_list": [
00:15:54.936 {
00:15:54.936 "name": "BaseBdev1",
00:15:54.936 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0",
00:15:54.936 "is_configured": true,
00:15:54.936 "data_offset": 0,
00:15:54.936 "data_size": 65536
00:15:54.936 },
00:15:54.936 {
00:15:54.936 "name": null,
00:15:54.936 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:54.936 "is_configured": false,
00:15:54.936 "data_offset": 0,
00:15:54.936 "data_size": 65536
00:15:54.936 },
00:15:54.936 {
00:15:54.936 "name": "BaseBdev3",
00:15:54.936 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:54.936 "is_configured": true,
00:15:54.936 "data_offset": 0,
00:15:54.936 "data_size": 65536
00:15:54.936 }
00:15:54.936 ]
00:15:54.936 }'
00:15:54.936 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.936 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.500 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:55.500 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.501 [2024-11-19 10:10:09.507515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.501 "name": "Existed_Raid",
00:15:55.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.501 "strip_size_kb": 64,
00:15:55.501 "state": "configuring",
00:15:55.501 "raid_level": "raid5f",
00:15:55.501 "superblock": false,
00:15:55.501 "num_base_bdevs": 3,
00:15:55.501 "num_base_bdevs_discovered": 1,
00:15:55.501 "num_base_bdevs_operational": 3,
00:15:55.501 "base_bdevs_list": [
00:15:55.501 {
00:15:55.501 "name": "BaseBdev1",
00:15:55.501 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0",
00:15:55.501 "is_configured": true,
00:15:55.501 "data_offset": 0,
00:15:55.501 "data_size": 65536
00:15:55.501 },
00:15:55.501 {
00:15:55.501 "name": null,
00:15:55.501 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:55.501 "is_configured": false,
00:15:55.501 "data_offset": 0,
00:15:55.501 "data_size": 65536
00:15:55.501 },
00:15:55.501 {
00:15:55.501 "name": null,
00:15:55.501 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:55.501 "is_configured": false,
00:15:55.501 "data_offset": 0,
00:15:55.501 "data_size": 65536
00:15:55.501 }
00:15:55.501 ]
00:15:55.501 }'
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:55.501 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.069 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.069 10:10:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:56.069 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.069 10:10:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.069 [2024-11-19 10:10:10.043691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.069 "name": "Existed_Raid",
00:15:56.069 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.069 "strip_size_kb": 64,
00:15:56.069 "state": "configuring",
00:15:56.069 "raid_level": "raid5f",
00:15:56.069 "superblock": false,
00:15:56.069 "num_base_bdevs": 3,
00:15:56.069 "num_base_bdevs_discovered": 2,
00:15:56.069 "num_base_bdevs_operational": 3,
00:15:56.069 "base_bdevs_list": [
00:15:56.069 {
00:15:56.069 "name": "BaseBdev1",
00:15:56.069 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0",
00:15:56.069 "is_configured": true,
00:15:56.069 "data_offset": 0,
00:15:56.069 "data_size": 65536
00:15:56.069 },
00:15:56.069 {
00:15:56.069 "name": null,
00:15:56.069 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:56.069 "is_configured": false,
00:15:56.069 "data_offset": 0,
00:15:56.069 "data_size": 65536
00:15:56.069 },
00:15:56.069 {
00:15:56.069 "name": "BaseBdev3",
00:15:56.069 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:56.069 "is_configured": true,
00:15:56.069 "data_offset": 0,
00:15:56.069 "data_size": 65536
00:15:56.069 }
00:15:56.069 ]
00:15:56.069 }'
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.069 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.636 [2024-11-19 10:10:10.651861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.636 "name": "Existed_Raid",
00:15:56.636 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.636 "strip_size_kb": 64,
00:15:56.636 "state": "configuring",
00:15:56.636 "raid_level": "raid5f",
00:15:56.636 "superblock": false,
00:15:56.636 "num_base_bdevs": 3,
00:15:56.636 "num_base_bdevs_discovered": 1,
00:15:56.636 "num_base_bdevs_operational": 3,
00:15:56.636 "base_bdevs_list": [
00:15:56.636 {
00:15:56.636 "name": null,
00:15:56.636 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0",
00:15:56.636 "is_configured": false,
00:15:56.636 "data_offset": 0,
00:15:56.636 "data_size": 65536
00:15:56.636 },
00:15:56.636 {
00:15:56.636 "name": null,
00:15:56.636 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869",
00:15:56.636 "is_configured": false,
00:15:56.636 "data_offset": 0,
00:15:56.636 "data_size": 65536
00:15:56.636 },
00:15:56.636 {
00:15:56.636 "name": "BaseBdev3",
00:15:56.636 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350",
00:15:56.636 "is_configured": true,
00:15:56.636 "data_offset": 0,
00:15:56.636 "data_size": 65536
00:15:56.636 }
00:15:56.636 ]
00:15:56.636 }'
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.636 10:10:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test --
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.202 [2024-11-19 10:10:11.325438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.202 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.203 10:10:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.203 "name": "Existed_Raid", 00:15:57.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.203 "strip_size_kb": 64, 00:15:57.203 "state": "configuring", 00:15:57.203 "raid_level": "raid5f", 00:15:57.203 "superblock": false, 00:15:57.203 "num_base_bdevs": 3, 00:15:57.203 "num_base_bdevs_discovered": 2, 00:15:57.203 "num_base_bdevs_operational": 3, 00:15:57.203 "base_bdevs_list": [ 00:15:57.203 { 00:15:57.203 "name": null, 00:15:57.203 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0", 00:15:57.203 "is_configured": false, 00:15:57.203 "data_offset": 0, 00:15:57.203 "data_size": 65536 00:15:57.203 }, 00:15:57.203 { 00:15:57.203 "name": "BaseBdev2", 00:15:57.203 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869", 00:15:57.203 "is_configured": true, 00:15:57.203 "data_offset": 0, 00:15:57.203 "data_size": 65536 00:15:57.203 }, 00:15:57.203 { 00:15:57.203 "name": "BaseBdev3", 00:15:57.203 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350", 00:15:57.203 "is_configured": true, 00:15:57.203 "data_offset": 0, 00:15:57.203 "data_size": 65536 00:15:57.203 } 00:15:57.203 ] 00:15:57.203 }' 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.203 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.769 10:10:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8e330f5a-651f-4c82-8784-0950ec3a32b0 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.769 10:10:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 [2024-11-19 10:10:12.019959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:58.029 [2024-11-19 10:10:12.020048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:58.029 [2024-11-19 10:10:12.020066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:58.029 [2024-11-19 10:10:12.020433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:58.029 [2024-11-19 10:10:12.025552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:58.029 [2024-11-19 10:10:12.025586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:58.029 [2024-11-19 10:10:12.026000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.029 NewBaseBdev 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.029 10:10:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 [ 00:15:58.029 { 00:15:58.029 "name": "NewBaseBdev", 00:15:58.029 "aliases": [ 00:15:58.029 "8e330f5a-651f-4c82-8784-0950ec3a32b0" 00:15:58.029 ], 00:15:58.029 "product_name": "Malloc disk", 00:15:58.029 "block_size": 512, 00:15:58.029 "num_blocks": 65536, 00:15:58.029 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0", 00:15:58.029 "assigned_rate_limits": { 00:15:58.029 "rw_ios_per_sec": 0, 00:15:58.029 "rw_mbytes_per_sec": 0, 00:15:58.029 "r_mbytes_per_sec": 0, 00:15:58.029 "w_mbytes_per_sec": 0 00:15:58.029 }, 00:15:58.029 "claimed": true, 00:15:58.029 "claim_type": "exclusive_write", 00:15:58.029 "zoned": false, 00:15:58.029 "supported_io_types": { 00:15:58.029 "read": true, 00:15:58.029 "write": true, 00:15:58.029 "unmap": true, 00:15:58.029 "flush": true, 00:15:58.029 "reset": true, 00:15:58.029 "nvme_admin": false, 00:15:58.029 "nvme_io": false, 00:15:58.029 "nvme_io_md": false, 00:15:58.029 "write_zeroes": true, 00:15:58.029 "zcopy": true, 00:15:58.029 "get_zone_info": false, 00:15:58.029 "zone_management": false, 00:15:58.029 "zone_append": false, 00:15:58.029 "compare": false, 00:15:58.029 "compare_and_write": false, 00:15:58.029 "abort": true, 00:15:58.029 "seek_hole": false, 00:15:58.029 "seek_data": false, 00:15:58.029 "copy": true, 00:15:58.029 "nvme_iov_md": false 00:15:58.029 }, 00:15:58.029 "memory_domains": [ 00:15:58.029 { 00:15:58.029 "dma_device_id": "system", 00:15:58.029 "dma_device_type": 1 00:15:58.029 }, 00:15:58.029 { 00:15:58.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.029 "dma_device_type": 2 00:15:58.029 } 00:15:58.029 ], 00:15:58.029 "driver_specific": {} 00:15:58.029 } 00:15:58.029 ] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:58.029 10:10:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.029 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.029 "name": "Existed_Raid", 00:15:58.029 "uuid": "7e4c5838-59db-419a-be1b-8a24bcd5c50e", 00:15:58.029 "strip_size_kb": 64, 00:15:58.029 "state": "online", 
00:15:58.029 "raid_level": "raid5f", 00:15:58.029 "superblock": false, 00:15:58.029 "num_base_bdevs": 3, 00:15:58.029 "num_base_bdevs_discovered": 3, 00:15:58.029 "num_base_bdevs_operational": 3, 00:15:58.029 "base_bdevs_list": [ 00:15:58.029 { 00:15:58.029 "name": "NewBaseBdev", 00:15:58.029 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 0, 00:15:58.029 "data_size": 65536 00:15:58.029 }, 00:15:58.029 { 00:15:58.029 "name": "BaseBdev2", 00:15:58.029 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 0, 00:15:58.029 "data_size": 65536 00:15:58.029 }, 00:15:58.029 { 00:15:58.029 "name": "BaseBdev3", 00:15:58.029 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 0, 00:15:58.029 "data_size": 65536 00:15:58.030 } 00:15:58.030 ] 00:15:58.030 }' 00:15:58.030 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.030 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.597 10:10:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.597 [2024-11-19 10:10:12.588477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.597 "name": "Existed_Raid", 00:15:58.597 "aliases": [ 00:15:58.597 "7e4c5838-59db-419a-be1b-8a24bcd5c50e" 00:15:58.597 ], 00:15:58.597 "product_name": "Raid Volume", 00:15:58.597 "block_size": 512, 00:15:58.597 "num_blocks": 131072, 00:15:58.597 "uuid": "7e4c5838-59db-419a-be1b-8a24bcd5c50e", 00:15:58.597 "assigned_rate_limits": { 00:15:58.597 "rw_ios_per_sec": 0, 00:15:58.597 "rw_mbytes_per_sec": 0, 00:15:58.597 "r_mbytes_per_sec": 0, 00:15:58.597 "w_mbytes_per_sec": 0 00:15:58.597 }, 00:15:58.597 "claimed": false, 00:15:58.597 "zoned": false, 00:15:58.597 "supported_io_types": { 00:15:58.597 "read": true, 00:15:58.597 "write": true, 00:15:58.597 "unmap": false, 00:15:58.597 "flush": false, 00:15:58.597 "reset": true, 00:15:58.597 "nvme_admin": false, 00:15:58.597 "nvme_io": false, 00:15:58.597 "nvme_io_md": false, 00:15:58.597 "write_zeroes": true, 00:15:58.597 "zcopy": false, 00:15:58.597 "get_zone_info": false, 00:15:58.597 "zone_management": false, 00:15:58.597 "zone_append": false, 00:15:58.597 "compare": false, 00:15:58.597 "compare_and_write": false, 00:15:58.597 "abort": false, 00:15:58.597 "seek_hole": false, 00:15:58.597 "seek_data": false, 00:15:58.597 "copy": false, 00:15:58.597 "nvme_iov_md": false 00:15:58.597 }, 00:15:58.597 "driver_specific": { 00:15:58.597 "raid": { 00:15:58.597 "uuid": 
"7e4c5838-59db-419a-be1b-8a24bcd5c50e", 00:15:58.597 "strip_size_kb": 64, 00:15:58.597 "state": "online", 00:15:58.597 "raid_level": "raid5f", 00:15:58.597 "superblock": false, 00:15:58.597 "num_base_bdevs": 3, 00:15:58.597 "num_base_bdevs_discovered": 3, 00:15:58.597 "num_base_bdevs_operational": 3, 00:15:58.597 "base_bdevs_list": [ 00:15:58.597 { 00:15:58.597 "name": "NewBaseBdev", 00:15:58.597 "uuid": "8e330f5a-651f-4c82-8784-0950ec3a32b0", 00:15:58.597 "is_configured": true, 00:15:58.597 "data_offset": 0, 00:15:58.597 "data_size": 65536 00:15:58.597 }, 00:15:58.597 { 00:15:58.597 "name": "BaseBdev2", 00:15:58.597 "uuid": "83001392-f8a1-407e-ab5b-7f7dc1514869", 00:15:58.597 "is_configured": true, 00:15:58.597 "data_offset": 0, 00:15:58.597 "data_size": 65536 00:15:58.597 }, 00:15:58.597 { 00:15:58.597 "name": "BaseBdev3", 00:15:58.597 "uuid": "db6e7107-d4db-45e0-ae08-29237ab3e350", 00:15:58.597 "is_configured": true, 00:15:58.597 "data_offset": 0, 00:15:58.597 "data_size": 65536 00:15:58.597 } 00:15:58.597 ] 00:15:58.597 } 00:15:58.597 } 00:15:58.597 }' 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:58.597 BaseBdev2 00:15:58.597 BaseBdev3' 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:58.597 10:10:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.598 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.856 10:10:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.856 [2024-11-19 10:10:12.896332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.856 [2024-11-19 10:10:12.896375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.856 [2024-11-19 10:10:12.896509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.856 [2024-11-19 10:10:12.896944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.856 [2024-11-19 10:10:12.896982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80208 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80208 ']' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80208 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80208 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.856 killing process with pid 80208 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80208' 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80208 00:15:58.856 [2024-11-19 10:10:12.937018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.856 10:10:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80208 00:15:59.115 [2024-11-19 10:10:13.232660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:00.490 00:16:00.490 real 0m11.814s 00:16:00.490 user 0m19.340s 00:16:00.490 sys 0m1.689s 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.490 ************************************ 00:16:00.490 END TEST raid5f_state_function_test 00:16:00.490 ************************************ 00:16:00.490 10:10:14 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:00.490 10:10:14 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:00.490 10:10:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.490 10:10:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.490 ************************************ 00:16:00.490 START TEST raid5f_state_function_test_sb 00:16:00.490 ************************************ 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.490 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:00.491 10:10:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80841 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:00.491 Process raid pid: 80841 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80841' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80841 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80841 ']' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.491 10:10:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.491 [2024-11-19 10:10:14.544547] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:16:00.491 [2024-11-19 10:10:14.544738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.749 [2024-11-19 10:10:14.722216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.749 [2024-11-19 10:10:14.870030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.007 [2024-11-19 10:10:15.100213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.007 [2024-11-19 10:10:15.100297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.574 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.574 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:01.574 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:01.574 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.574 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.574 [2024-11-19 10:10:15.589409] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.574 [2024-11-19 10:10:15.589482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.574 [2024-11-19 10:10:15.589501] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.575 [2024-11-19 10:10:15.589519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.575 [2024-11-19 10:10:15.589530] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:01.575 [2024-11-19 10:10:15.589545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.575 10:10:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.575 "name": "Existed_Raid", 00:16:01.575 "uuid": "c947a598-4faf-42ef-a722-6867c10f4e13", 00:16:01.575 "strip_size_kb": 64, 00:16:01.575 "state": "configuring", 00:16:01.575 "raid_level": "raid5f", 00:16:01.575 "superblock": true, 00:16:01.575 "num_base_bdevs": 3, 00:16:01.575 "num_base_bdevs_discovered": 0, 00:16:01.575 "num_base_bdevs_operational": 3, 00:16:01.575 "base_bdevs_list": [ 00:16:01.575 { 00:16:01.575 "name": "BaseBdev1", 00:16:01.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.575 "is_configured": false, 00:16:01.575 "data_offset": 0, 00:16:01.575 "data_size": 0 00:16:01.575 }, 00:16:01.575 { 00:16:01.575 "name": "BaseBdev2", 00:16:01.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.575 "is_configured": false, 00:16:01.575 "data_offset": 0, 00:16:01.575 "data_size": 0 00:16:01.575 }, 00:16:01.575 { 00:16:01.575 "name": "BaseBdev3", 00:16:01.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.575 "is_configured": false, 00:16:01.575 "data_offset": 0, 00:16:01.575 "data_size": 0 00:16:01.575 } 00:16:01.575 ] 00:16:01.575 }' 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.575 10:10:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 [2024-11-19 10:10:16.121483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.141 
[2024-11-19 10:10:16.121540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 [2024-11-19 10:10:16.129493] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.141 [2024-11-19 10:10:16.129565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.141 [2024-11-19 10:10:16.129583] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.141 [2024-11-19 10:10:16.129601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.141 [2024-11-19 10:10:16.129612] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.141 [2024-11-19 10:10:16.129627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 [2024-11-19 10:10:16.178593] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.141 BaseBdev1 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 [ 00:16:02.141 { 00:16:02.141 "name": "BaseBdev1", 00:16:02.141 "aliases": [ 00:16:02.141 "70e9c15d-8443-4e05-b590-9667c73eac7b" 00:16:02.141 ], 00:16:02.141 "product_name": "Malloc disk", 00:16:02.141 "block_size": 512, 00:16:02.141 
"num_blocks": 65536, 00:16:02.141 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:02.141 "assigned_rate_limits": { 00:16:02.141 "rw_ios_per_sec": 0, 00:16:02.141 "rw_mbytes_per_sec": 0, 00:16:02.141 "r_mbytes_per_sec": 0, 00:16:02.141 "w_mbytes_per_sec": 0 00:16:02.141 }, 00:16:02.141 "claimed": true, 00:16:02.141 "claim_type": "exclusive_write", 00:16:02.141 "zoned": false, 00:16:02.141 "supported_io_types": { 00:16:02.141 "read": true, 00:16:02.141 "write": true, 00:16:02.141 "unmap": true, 00:16:02.141 "flush": true, 00:16:02.141 "reset": true, 00:16:02.141 "nvme_admin": false, 00:16:02.141 "nvme_io": false, 00:16:02.141 "nvme_io_md": false, 00:16:02.141 "write_zeroes": true, 00:16:02.141 "zcopy": true, 00:16:02.141 "get_zone_info": false, 00:16:02.141 "zone_management": false, 00:16:02.141 "zone_append": false, 00:16:02.141 "compare": false, 00:16:02.141 "compare_and_write": false, 00:16:02.141 "abort": true, 00:16:02.141 "seek_hole": false, 00:16:02.141 "seek_data": false, 00:16:02.141 "copy": true, 00:16:02.141 "nvme_iov_md": false 00:16:02.141 }, 00:16:02.141 "memory_domains": [ 00:16:02.141 { 00:16:02.141 "dma_device_id": "system", 00:16:02.141 "dma_device_type": 1 00:16:02.141 }, 00:16:02.141 { 00:16:02.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.141 "dma_device_type": 2 00:16:02.141 } 00:16:02.141 ], 00:16:02.141 "driver_specific": {} 00:16:02.141 } 00:16:02.141 ] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.141 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.142 "name": "Existed_Raid", 00:16:02.142 "uuid": "37f53d31-6e42-4bb0-88a5-f453b39837c1", 00:16:02.142 "strip_size_kb": 64, 00:16:02.142 "state": "configuring", 00:16:02.142 "raid_level": "raid5f", 00:16:02.142 "superblock": true, 00:16:02.142 "num_base_bdevs": 3, 00:16:02.142 "num_base_bdevs_discovered": 1, 00:16:02.142 "num_base_bdevs_operational": 3, 00:16:02.142 "base_bdevs_list": [ 00:16:02.142 { 00:16:02.142 
"name": "BaseBdev1", 00:16:02.142 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:02.142 "is_configured": true, 00:16:02.142 "data_offset": 2048, 00:16:02.142 "data_size": 63488 00:16:02.142 }, 00:16:02.142 { 00:16:02.142 "name": "BaseBdev2", 00:16:02.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.142 "is_configured": false, 00:16:02.142 "data_offset": 0, 00:16:02.142 "data_size": 0 00:16:02.142 }, 00:16:02.142 { 00:16:02.142 "name": "BaseBdev3", 00:16:02.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.142 "is_configured": false, 00:16:02.142 "data_offset": 0, 00:16:02.142 "data_size": 0 00:16:02.142 } 00:16:02.142 ] 00:16:02.142 }' 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.142 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.708 [2024-11-19 10:10:16.734835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.708 [2024-11-19 10:10:16.734914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:02.708 [2024-11-19 10:10:16.742928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.708 [2024-11-19 10:10:16.745668] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.708 [2024-11-19 10:10:16.745733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.708 [2024-11-19 10:10:16.745752] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.708 [2024-11-19 10:10:16.745768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:02.708 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.709 "name": "Existed_Raid", 00:16:02.709 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:02.709 "strip_size_kb": 64, 00:16:02.709 "state": "configuring", 00:16:02.709 "raid_level": "raid5f", 00:16:02.709 "superblock": true, 00:16:02.709 "num_base_bdevs": 3, 00:16:02.709 "num_base_bdevs_discovered": 1, 00:16:02.709 "num_base_bdevs_operational": 3, 00:16:02.709 "base_bdevs_list": [ 00:16:02.709 { 00:16:02.709 "name": "BaseBdev1", 00:16:02.709 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:02.709 "is_configured": true, 00:16:02.709 "data_offset": 2048, 00:16:02.709 "data_size": 63488 00:16:02.709 }, 00:16:02.709 { 00:16:02.709 "name": "BaseBdev2", 00:16:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.709 "is_configured": false, 00:16:02.709 "data_offset": 0, 00:16:02.709 "data_size": 0 00:16:02.709 }, 00:16:02.709 { 00:16:02.709 "name": "BaseBdev3", 00:16:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.709 "is_configured": false, 00:16:02.709 "data_offset": 0, 00:16:02.709 "data_size": 
0 00:16:02.709 } 00:16:02.709 ] 00:16:02.709 }' 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.709 10:10:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 [2024-11-19 10:10:17.297923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.277 BaseBdev2 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 [ 00:16:03.277 { 00:16:03.277 "name": "BaseBdev2", 00:16:03.277 "aliases": [ 00:16:03.277 "3ce0f2ba-8cd0-4509-a576-a537898fbd63" 00:16:03.277 ], 00:16:03.277 "product_name": "Malloc disk", 00:16:03.277 "block_size": 512, 00:16:03.277 "num_blocks": 65536, 00:16:03.277 "uuid": "3ce0f2ba-8cd0-4509-a576-a537898fbd63", 00:16:03.277 "assigned_rate_limits": { 00:16:03.277 "rw_ios_per_sec": 0, 00:16:03.277 "rw_mbytes_per_sec": 0, 00:16:03.277 "r_mbytes_per_sec": 0, 00:16:03.277 "w_mbytes_per_sec": 0 00:16:03.277 }, 00:16:03.277 "claimed": true, 00:16:03.277 "claim_type": "exclusive_write", 00:16:03.277 "zoned": false, 00:16:03.277 "supported_io_types": { 00:16:03.277 "read": true, 00:16:03.277 "write": true, 00:16:03.277 "unmap": true, 00:16:03.277 "flush": true, 00:16:03.277 "reset": true, 00:16:03.277 "nvme_admin": false, 00:16:03.277 "nvme_io": false, 00:16:03.277 "nvme_io_md": false, 00:16:03.277 "write_zeroes": true, 00:16:03.277 "zcopy": true, 00:16:03.277 "get_zone_info": false, 00:16:03.277 "zone_management": false, 00:16:03.277 "zone_append": false, 00:16:03.277 "compare": false, 00:16:03.277 "compare_and_write": false, 00:16:03.277 "abort": true, 00:16:03.277 "seek_hole": false, 00:16:03.277 "seek_data": false, 00:16:03.277 "copy": true, 00:16:03.277 "nvme_iov_md": false 00:16:03.277 }, 00:16:03.277 "memory_domains": [ 00:16:03.277 { 00:16:03.277 "dma_device_id": "system", 00:16:03.277 "dma_device_type": 1 00:16:03.277 }, 00:16:03.277 { 00:16:03.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.277 "dma_device_type": 2 00:16:03.277 } 
00:16:03.277 ], 00:16:03.277 "driver_specific": {} 00:16:03.277 } 00:16:03.277 ] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.277 "name": "Existed_Raid", 00:16:03.277 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:03.277 "strip_size_kb": 64, 00:16:03.277 "state": "configuring", 00:16:03.277 "raid_level": "raid5f", 00:16:03.277 "superblock": true, 00:16:03.277 "num_base_bdevs": 3, 00:16:03.277 "num_base_bdevs_discovered": 2, 00:16:03.277 "num_base_bdevs_operational": 3, 00:16:03.277 "base_bdevs_list": [ 00:16:03.277 { 00:16:03.277 "name": "BaseBdev1", 00:16:03.277 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:03.277 "is_configured": true, 00:16:03.277 "data_offset": 2048, 00:16:03.277 "data_size": 63488 00:16:03.277 }, 00:16:03.277 { 00:16:03.277 "name": "BaseBdev2", 00:16:03.277 "uuid": "3ce0f2ba-8cd0-4509-a576-a537898fbd63", 00:16:03.277 "is_configured": true, 00:16:03.277 "data_offset": 2048, 00:16:03.277 "data_size": 63488 00:16:03.277 }, 00:16:03.277 { 00:16:03.277 "name": "BaseBdev3", 00:16:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.277 "is_configured": false, 00:16:03.277 "data_offset": 0, 00:16:03.277 "data_size": 0 00:16:03.277 } 00:16:03.277 ] 00:16:03.277 }' 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.277 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.869 [2024-11-19 10:10:17.852131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.869 [2024-11-19 10:10:17.852538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:03.869 [2024-11-19 10:10:17.852573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.869 [2024-11-19 10:10:17.852954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:03.869 BaseBdev3 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.869 [2024-11-19 10:10:17.858329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:03.869 [2024-11-19 10:10:17.858361] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:03.869 [2024-11-19 10:10:17.858609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.869 [ 00:16:03.869 { 00:16:03.869 "name": "BaseBdev3", 00:16:03.869 "aliases": [ 00:16:03.869 "8fb2096b-8e2c-40b5-a490-06a98c533cc9" 00:16:03.869 ], 00:16:03.869 "product_name": "Malloc disk", 00:16:03.869 "block_size": 512, 00:16:03.869 "num_blocks": 65536, 00:16:03.869 "uuid": "8fb2096b-8e2c-40b5-a490-06a98c533cc9", 00:16:03.869 "assigned_rate_limits": { 00:16:03.869 "rw_ios_per_sec": 0, 00:16:03.869 "rw_mbytes_per_sec": 0, 00:16:03.869 "r_mbytes_per_sec": 0, 00:16:03.869 "w_mbytes_per_sec": 0 00:16:03.869 }, 00:16:03.869 "claimed": true, 00:16:03.869 "claim_type": "exclusive_write", 00:16:03.869 "zoned": false, 00:16:03.869 "supported_io_types": { 00:16:03.869 "read": true, 00:16:03.869 "write": true, 00:16:03.869 "unmap": true, 00:16:03.869 "flush": true, 00:16:03.869 "reset": true, 00:16:03.869 "nvme_admin": false, 00:16:03.869 "nvme_io": false, 00:16:03.869 "nvme_io_md": false, 00:16:03.869 "write_zeroes": true, 00:16:03.869 "zcopy": true, 00:16:03.869 "get_zone_info": false, 00:16:03.869 "zone_management": false, 00:16:03.869 "zone_append": false, 00:16:03.869 "compare": false, 00:16:03.869 "compare_and_write": false, 00:16:03.869 "abort": true, 00:16:03.869 "seek_hole": false, 00:16:03.869 "seek_data": false, 00:16:03.869 "copy": true, 00:16:03.869 
"nvme_iov_md": false 00:16:03.869 }, 00:16:03.869 "memory_domains": [ 00:16:03.869 { 00:16:03.869 "dma_device_id": "system", 00:16:03.869 "dma_device_type": 1 00:16:03.869 }, 00:16:03.869 { 00:16:03.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.869 "dma_device_type": 2 00:16:03.869 } 00:16:03.869 ], 00:16:03.869 "driver_specific": {} 00:16:03.869 } 00:16:03.869 ] 00:16:03.869 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.870 "name": "Existed_Raid", 00:16:03.870 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:03.870 "strip_size_kb": 64, 00:16:03.870 "state": "online", 00:16:03.870 "raid_level": "raid5f", 00:16:03.870 "superblock": true, 00:16:03.870 "num_base_bdevs": 3, 00:16:03.870 "num_base_bdevs_discovered": 3, 00:16:03.870 "num_base_bdevs_operational": 3, 00:16:03.870 "base_bdevs_list": [ 00:16:03.870 { 00:16:03.870 "name": "BaseBdev1", 00:16:03.870 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:03.870 "is_configured": true, 00:16:03.870 "data_offset": 2048, 00:16:03.870 "data_size": 63488 00:16:03.870 }, 00:16:03.870 { 00:16:03.870 "name": "BaseBdev2", 00:16:03.870 "uuid": "3ce0f2ba-8cd0-4509-a576-a537898fbd63", 00:16:03.870 "is_configured": true, 00:16:03.870 "data_offset": 2048, 00:16:03.870 "data_size": 63488 00:16:03.870 }, 00:16:03.870 { 00:16:03.870 "name": "BaseBdev3", 00:16:03.870 "uuid": "8fb2096b-8e2c-40b5-a490-06a98c533cc9", 00:16:03.870 "is_configured": true, 00:16:03.870 "data_offset": 2048, 00:16:03.870 "data_size": 63488 00:16:03.870 } 00:16:03.870 ] 00:16:03.870 }' 00:16:03.870 10:10:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.870 10:10:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.437 [2024-11-19 10:10:18.433127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.437 "name": "Existed_Raid", 00:16:04.437 "aliases": [ 00:16:04.437 "db561df0-00d8-42af-869d-d74c60afe900" 00:16:04.437 ], 00:16:04.437 "product_name": "Raid Volume", 00:16:04.437 "block_size": 512, 00:16:04.437 "num_blocks": 126976, 00:16:04.437 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:04.437 "assigned_rate_limits": { 00:16:04.437 "rw_ios_per_sec": 0, 00:16:04.437 
"rw_mbytes_per_sec": 0, 00:16:04.437 "r_mbytes_per_sec": 0, 00:16:04.437 "w_mbytes_per_sec": 0 00:16:04.437 }, 00:16:04.437 "claimed": false, 00:16:04.437 "zoned": false, 00:16:04.437 "supported_io_types": { 00:16:04.437 "read": true, 00:16:04.437 "write": true, 00:16:04.437 "unmap": false, 00:16:04.437 "flush": false, 00:16:04.437 "reset": true, 00:16:04.437 "nvme_admin": false, 00:16:04.437 "nvme_io": false, 00:16:04.437 "nvme_io_md": false, 00:16:04.437 "write_zeroes": true, 00:16:04.437 "zcopy": false, 00:16:04.437 "get_zone_info": false, 00:16:04.437 "zone_management": false, 00:16:04.437 "zone_append": false, 00:16:04.437 "compare": false, 00:16:04.437 "compare_and_write": false, 00:16:04.437 "abort": false, 00:16:04.437 "seek_hole": false, 00:16:04.437 "seek_data": false, 00:16:04.437 "copy": false, 00:16:04.437 "nvme_iov_md": false 00:16:04.437 }, 00:16:04.437 "driver_specific": { 00:16:04.437 "raid": { 00:16:04.437 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:04.437 "strip_size_kb": 64, 00:16:04.437 "state": "online", 00:16:04.437 "raid_level": "raid5f", 00:16:04.437 "superblock": true, 00:16:04.437 "num_base_bdevs": 3, 00:16:04.437 "num_base_bdevs_discovered": 3, 00:16:04.437 "num_base_bdevs_operational": 3, 00:16:04.437 "base_bdevs_list": [ 00:16:04.437 { 00:16:04.437 "name": "BaseBdev1", 00:16:04.437 "uuid": "70e9c15d-8443-4e05-b590-9667c73eac7b", 00:16:04.437 "is_configured": true, 00:16:04.437 "data_offset": 2048, 00:16:04.437 "data_size": 63488 00:16:04.437 }, 00:16:04.437 { 00:16:04.437 "name": "BaseBdev2", 00:16:04.437 "uuid": "3ce0f2ba-8cd0-4509-a576-a537898fbd63", 00:16:04.437 "is_configured": true, 00:16:04.437 "data_offset": 2048, 00:16:04.437 "data_size": 63488 00:16:04.437 }, 00:16:04.437 { 00:16:04.437 "name": "BaseBdev3", 00:16:04.437 "uuid": "8fb2096b-8e2c-40b5-a490-06a98c533cc9", 00:16:04.437 "is_configured": true, 00:16:04.437 "data_offset": 2048, 00:16:04.437 "data_size": 63488 00:16:04.437 } 00:16:04.437 ] 00:16:04.437 } 
00:16:04.437 } 00:16:04.437 }' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:04.437 BaseBdev2 00:16:04.437 BaseBdev3' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.437 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.696 [2024-11-19 
10:10:18.753040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.696 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.697 10:10:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.697 "name": "Existed_Raid", 00:16:04.697 "uuid": "db561df0-00d8-42af-869d-d74c60afe900", 00:16:04.697 "strip_size_kb": 64, 00:16:04.697 "state": "online", 00:16:04.697 "raid_level": "raid5f", 00:16:04.697 "superblock": true, 00:16:04.697 "num_base_bdevs": 3, 00:16:04.697 "num_base_bdevs_discovered": 2, 00:16:04.697 "num_base_bdevs_operational": 2, 00:16:04.697 "base_bdevs_list": [ 00:16:04.697 { 00:16:04.697 "name": null, 00:16:04.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.697 "is_configured": false, 00:16:04.697 "data_offset": 0, 00:16:04.697 "data_size": 63488 00:16:04.697 }, 00:16:04.697 { 00:16:04.697 "name": "BaseBdev2", 00:16:04.697 "uuid": "3ce0f2ba-8cd0-4509-a576-a537898fbd63", 00:16:04.697 "is_configured": true, 00:16:04.697 "data_offset": 2048, 00:16:04.697 "data_size": 63488 00:16:04.697 }, 00:16:04.697 { 00:16:04.697 "name": "BaseBdev3", 00:16:04.697 "uuid": "8fb2096b-8e2c-40b5-a490-06a98c533cc9", 00:16:04.697 "is_configured": true, 00:16:04.697 "data_offset": 2048, 00:16:04.697 "data_size": 63488 00:16:04.697 } 00:16:04.697 ] 00:16:04.697 }' 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.697 10:10:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.262 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.262 [2024-11-19 10:10:19.406404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.262 [2024-11-19 10:10:19.406659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.520 [2024-11-19 10:10:19.500451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.520 10:10:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.520 [2024-11-19 10:10:19.564578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.520 [2024-11-19 10:10:19.564673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.520 
10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.520 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.780 BaseBdev2 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.780 10:10:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.780 [ 00:16:05.780 { 00:16:05.780 "name": "BaseBdev2", 00:16:05.780 "aliases": [ 00:16:05.780 "88b4e7bb-327c-47fc-b74a-e7312a6ef66a" 00:16:05.780 ], 00:16:05.780 "product_name": "Malloc disk", 00:16:05.780 "block_size": 512, 00:16:05.780 "num_blocks": 65536, 00:16:05.780 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:05.780 "assigned_rate_limits": { 00:16:05.780 "rw_ios_per_sec": 0, 00:16:05.780 "rw_mbytes_per_sec": 0, 00:16:05.780 "r_mbytes_per_sec": 0, 00:16:05.780 "w_mbytes_per_sec": 0 00:16:05.780 }, 00:16:05.780 "claimed": false, 00:16:05.780 "zoned": false, 00:16:05.780 "supported_io_types": { 00:16:05.780 "read": true, 00:16:05.780 "write": true, 00:16:05.780 "unmap": true, 00:16:05.780 "flush": true, 00:16:05.780 "reset": true, 00:16:05.780 "nvme_admin": false, 00:16:05.780 "nvme_io": false, 00:16:05.780 "nvme_io_md": false, 00:16:05.780 "write_zeroes": true, 00:16:05.780 "zcopy": true, 00:16:05.780 "get_zone_info": false, 
00:16:05.780 "zone_management": false, 00:16:05.780 "zone_append": false, 00:16:05.780 "compare": false, 00:16:05.780 "compare_and_write": false, 00:16:05.780 "abort": true, 00:16:05.780 "seek_hole": false, 00:16:05.780 "seek_data": false, 00:16:05.780 "copy": true, 00:16:05.780 "nvme_iov_md": false 00:16:05.780 }, 00:16:05.780 "memory_domains": [ 00:16:05.780 { 00:16:05.780 "dma_device_id": "system", 00:16:05.780 "dma_device_type": 1 00:16:05.780 }, 00:16:05.780 { 00:16:05.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.780 "dma_device_type": 2 00:16:05.780 } 00:16:05.780 ], 00:16:05.780 "driver_specific": {} 00:16:05.780 } 00:16:05.780 ] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.780 BaseBdev3 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.780 10:10:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.780 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.781 [ 00:16:05.781 { 00:16:05.781 "name": "BaseBdev3", 00:16:05.781 "aliases": [ 00:16:05.781 "3b773027-ff24-4b43-a934-2d6ee1e3b6ee" 00:16:05.781 ], 00:16:05.781 "product_name": "Malloc disk", 00:16:05.781 "block_size": 512, 00:16:05.781 "num_blocks": 65536, 00:16:05.781 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:05.781 "assigned_rate_limits": { 00:16:05.781 "rw_ios_per_sec": 0, 00:16:05.781 "rw_mbytes_per_sec": 0, 00:16:05.781 "r_mbytes_per_sec": 0, 00:16:05.781 "w_mbytes_per_sec": 0 00:16:05.781 }, 00:16:05.781 "claimed": false, 00:16:05.781 "zoned": false, 00:16:05.781 "supported_io_types": { 00:16:05.781 "read": true, 00:16:05.781 "write": true, 00:16:05.781 "unmap": true, 00:16:05.781 "flush": true, 00:16:05.781 "reset": true, 00:16:05.781 "nvme_admin": false, 00:16:05.781 "nvme_io": false, 00:16:05.781 "nvme_io_md": 
false, 00:16:05.781 "write_zeroes": true, 00:16:05.781 "zcopy": true, 00:16:05.781 "get_zone_info": false, 00:16:05.781 "zone_management": false, 00:16:05.781 "zone_append": false, 00:16:05.781 "compare": false, 00:16:05.781 "compare_and_write": false, 00:16:05.781 "abort": true, 00:16:05.781 "seek_hole": false, 00:16:05.781 "seek_data": false, 00:16:05.781 "copy": true, 00:16:05.781 "nvme_iov_md": false 00:16:05.781 }, 00:16:05.781 "memory_domains": [ 00:16:05.781 { 00:16:05.781 "dma_device_id": "system", 00:16:05.781 "dma_device_type": 1 00:16:05.781 }, 00:16:05.781 { 00:16:05.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.781 "dma_device_type": 2 00:16:05.781 } 00:16:05.781 ], 00:16:05.781 "driver_specific": {} 00:16:05.781 } 00:16:05.781 ] 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.781 [2024-11-19 10:10:19.870481] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.781 [2024-11-19 10:10:19.870548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.781 [2024-11-19 10:10:19.870591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:05.781 [2024-11-19 10:10:19.873327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.781 10:10:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.781 "name": "Existed_Raid", 00:16:05.781 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:05.781 "strip_size_kb": 64, 00:16:05.781 "state": "configuring", 00:16:05.781 "raid_level": "raid5f", 00:16:05.781 "superblock": true, 00:16:05.781 "num_base_bdevs": 3, 00:16:05.781 "num_base_bdevs_discovered": 2, 00:16:05.781 "num_base_bdevs_operational": 3, 00:16:05.781 "base_bdevs_list": [ 00:16:05.781 { 00:16:05.781 "name": "BaseBdev1", 00:16:05.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.781 "is_configured": false, 00:16:05.781 "data_offset": 0, 00:16:05.781 "data_size": 0 00:16:05.781 }, 00:16:05.781 { 00:16:05.781 "name": "BaseBdev2", 00:16:05.781 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:05.781 "is_configured": true, 00:16:05.781 "data_offset": 2048, 00:16:05.781 "data_size": 63488 00:16:05.781 }, 00:16:05.781 { 00:16:05.781 "name": "BaseBdev3", 00:16:05.781 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:05.781 "is_configured": true, 00:16:05.781 "data_offset": 2048, 00:16:05.781 "data_size": 63488 00:16:05.781 } 00:16:05.781 ] 00:16:05.781 }' 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.781 10:10:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.349 [2024-11-19 10:10:20.390582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.349 
10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:06.349 "name": "Existed_Raid", 00:16:06.349 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:06.349 "strip_size_kb": 64, 00:16:06.349 "state": "configuring", 00:16:06.349 "raid_level": "raid5f", 00:16:06.349 "superblock": true, 00:16:06.349 "num_base_bdevs": 3, 00:16:06.349 "num_base_bdevs_discovered": 1, 00:16:06.349 "num_base_bdevs_operational": 3, 00:16:06.349 "base_bdevs_list": [ 00:16:06.349 { 00:16:06.349 "name": "BaseBdev1", 00:16:06.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.349 "is_configured": false, 00:16:06.349 "data_offset": 0, 00:16:06.349 "data_size": 0 00:16:06.349 }, 00:16:06.349 { 00:16:06.349 "name": null, 00:16:06.349 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:06.349 "is_configured": false, 00:16:06.349 "data_offset": 0, 00:16:06.349 "data_size": 63488 00:16:06.349 }, 00:16:06.349 { 00:16:06.349 "name": "BaseBdev3", 00:16:06.349 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:06.349 "is_configured": true, 00:16:06.349 "data_offset": 2048, 00:16:06.349 "data_size": 63488 00:16:06.349 } 00:16:06.349 ] 00:16:06.349 }' 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.349 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.915 10:10:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 [2024-11-19 10:10:21.036165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.915 BaseBdev1 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.915 
10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 [ 00:16:06.915 { 00:16:06.915 "name": "BaseBdev1", 00:16:06.915 "aliases": [ 00:16:06.915 "3e42094e-52e2-4191-b019-24697872af27" 00:16:06.915 ], 00:16:06.915 "product_name": "Malloc disk", 00:16:06.915 "block_size": 512, 00:16:06.915 "num_blocks": 65536, 00:16:06.915 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:06.915 "assigned_rate_limits": { 00:16:06.915 "rw_ios_per_sec": 0, 00:16:06.915 "rw_mbytes_per_sec": 0, 00:16:06.915 "r_mbytes_per_sec": 0, 00:16:06.915 "w_mbytes_per_sec": 0 00:16:06.915 }, 00:16:06.915 "claimed": true, 00:16:06.915 "claim_type": "exclusive_write", 00:16:06.915 "zoned": false, 00:16:06.915 "supported_io_types": { 00:16:06.915 "read": true, 00:16:06.915 "write": true, 00:16:06.915 "unmap": true, 00:16:06.915 "flush": true, 00:16:06.915 "reset": true, 00:16:06.915 "nvme_admin": false, 00:16:06.915 "nvme_io": false, 00:16:06.915 "nvme_io_md": false, 00:16:06.915 "write_zeroes": true, 00:16:06.915 "zcopy": true, 00:16:06.915 "get_zone_info": false, 00:16:06.915 "zone_management": false, 00:16:06.915 "zone_append": false, 00:16:06.915 "compare": false, 00:16:06.915 "compare_and_write": false, 00:16:06.915 "abort": true, 00:16:06.915 "seek_hole": false, 00:16:06.915 "seek_data": false, 00:16:06.915 "copy": true, 00:16:06.915 "nvme_iov_md": false 00:16:06.915 }, 00:16:06.915 "memory_domains": [ 00:16:06.915 { 00:16:06.915 "dma_device_id": "system", 00:16:06.915 "dma_device_type": 1 00:16:06.915 }, 00:16:06.915 { 00:16:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.915 "dma_device_type": 2 00:16:06.915 } 00:16:06.915 ], 00:16:06.915 "driver_specific": {} 00:16:06.915 } 00:16:06.915 ] 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.915 
10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.915 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.916 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.916 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:06.916 "name": "Existed_Raid", 00:16:06.916 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:06.916 "strip_size_kb": 64, 00:16:06.916 "state": "configuring", 00:16:06.916 "raid_level": "raid5f", 00:16:06.916 "superblock": true, 00:16:06.916 "num_base_bdevs": 3, 00:16:06.916 "num_base_bdevs_discovered": 2, 00:16:06.916 "num_base_bdevs_operational": 3, 00:16:06.916 "base_bdevs_list": [ 00:16:06.916 { 00:16:06.916 "name": "BaseBdev1", 00:16:06.916 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:06.916 "is_configured": true, 00:16:06.916 "data_offset": 2048, 00:16:06.916 "data_size": 63488 00:16:06.916 }, 00:16:06.916 { 00:16:06.916 "name": null, 00:16:06.916 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:06.916 "is_configured": false, 00:16:06.916 "data_offset": 0, 00:16:06.916 "data_size": 63488 00:16:06.916 }, 00:16:06.916 { 00:16:06.916 "name": "BaseBdev3", 00:16:06.916 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:06.916 "is_configured": true, 00:16:06.916 "data_offset": 2048, 00:16:06.916 "data_size": 63488 00:16:06.916 } 00:16:06.916 ] 00:16:06.916 }' 00:16:06.916 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.916 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.481 [2024-11-19 10:10:21.632714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.481 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.482 10:10:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.482 "name": "Existed_Raid", 00:16:07.482 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:07.482 "strip_size_kb": 64, 00:16:07.482 "state": "configuring", 00:16:07.482 "raid_level": "raid5f", 00:16:07.482 "superblock": true, 00:16:07.482 "num_base_bdevs": 3, 00:16:07.482 "num_base_bdevs_discovered": 1, 00:16:07.482 "num_base_bdevs_operational": 3, 00:16:07.482 "base_bdevs_list": [ 00:16:07.482 { 00:16:07.482 "name": "BaseBdev1", 00:16:07.482 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:07.482 "is_configured": true, 00:16:07.482 "data_offset": 2048, 00:16:07.482 "data_size": 63488 00:16:07.482 }, 00:16:07.482 { 00:16:07.482 "name": null, 00:16:07.482 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:07.482 "is_configured": false, 00:16:07.482 "data_offset": 0, 00:16:07.482 "data_size": 63488 00:16:07.482 }, 00:16:07.482 { 00:16:07.482 "name": null, 00:16:07.482 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:07.482 "is_configured": false, 00:16:07.482 "data_offset": 0, 00:16:07.482 "data_size": 63488 00:16:07.482 } 00:16:07.482 ] 00:16:07.482 }' 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.482 10:10:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.048 [2024-11-19 10:10:22.248931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.048 10:10:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.048 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.306 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.306 "name": "Existed_Raid", 00:16:08.306 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:08.306 "strip_size_kb": 64, 00:16:08.306 "state": "configuring", 00:16:08.306 "raid_level": "raid5f", 00:16:08.306 "superblock": true, 00:16:08.306 "num_base_bdevs": 3, 00:16:08.306 "num_base_bdevs_discovered": 2, 00:16:08.306 "num_base_bdevs_operational": 3, 00:16:08.306 "base_bdevs_list": [ 00:16:08.306 { 00:16:08.306 "name": "BaseBdev1", 00:16:08.306 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:08.306 "is_configured": true, 00:16:08.306 "data_offset": 2048, 00:16:08.306 "data_size": 63488 00:16:08.306 }, 00:16:08.306 { 00:16:08.306 "name": null, 00:16:08.306 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:08.306 "is_configured": false, 00:16:08.306 "data_offset": 0, 00:16:08.306 "data_size": 63488 00:16:08.306 }, 00:16:08.306 { 
00:16:08.306 "name": "BaseBdev3", 00:16:08.306 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:08.306 "is_configured": true, 00:16:08.306 "data_offset": 2048, 00:16:08.306 "data_size": 63488 00:16:08.306 } 00:16:08.306 ] 00:16:08.306 }' 00:16:08.306 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.306 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.564 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.564 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.564 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.564 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.564 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.823 [2024-11-19 10:10:22.825090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.823 "name": "Existed_Raid", 00:16:08.823 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:08.823 "strip_size_kb": 64, 00:16:08.823 "state": "configuring", 00:16:08.823 "raid_level": "raid5f", 00:16:08.823 "superblock": true, 00:16:08.823 "num_base_bdevs": 3, 00:16:08.823 "num_base_bdevs_discovered": 1, 00:16:08.823 
"num_base_bdevs_operational": 3, 00:16:08.823 "base_bdevs_list": [ 00:16:08.823 { 00:16:08.823 "name": null, 00:16:08.823 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:08.823 "is_configured": false, 00:16:08.823 "data_offset": 0, 00:16:08.823 "data_size": 63488 00:16:08.823 }, 00:16:08.823 { 00:16:08.823 "name": null, 00:16:08.823 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:08.823 "is_configured": false, 00:16:08.823 "data_offset": 0, 00:16:08.823 "data_size": 63488 00:16:08.823 }, 00:16:08.823 { 00:16:08.823 "name": "BaseBdev3", 00:16:08.823 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:08.823 "is_configured": true, 00:16:08.823 "data_offset": 2048, 00:16:08.823 "data_size": 63488 00:16:08.823 } 00:16:08.823 ] 00:16:08.823 }' 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.823 10:10:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.397 10:10:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.397 [2024-11-19 10:10:23.478965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.397 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.398 "name": "Existed_Raid", 00:16:09.398 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:09.398 "strip_size_kb": 64, 00:16:09.398 "state": "configuring", 00:16:09.398 "raid_level": "raid5f", 00:16:09.398 "superblock": true, 00:16:09.398 "num_base_bdevs": 3, 00:16:09.398 "num_base_bdevs_discovered": 2, 00:16:09.398 "num_base_bdevs_operational": 3, 00:16:09.398 "base_bdevs_list": [ 00:16:09.398 { 00:16:09.398 "name": null, 00:16:09.398 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:09.398 "is_configured": false, 00:16:09.398 "data_offset": 0, 00:16:09.398 "data_size": 63488 00:16:09.398 }, 00:16:09.398 { 00:16:09.398 "name": "BaseBdev2", 00:16:09.398 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:09.398 "is_configured": true, 00:16:09.398 "data_offset": 2048, 00:16:09.398 "data_size": 63488 00:16:09.398 }, 00:16:09.398 { 00:16:09.398 "name": "BaseBdev3", 00:16:09.398 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:09.398 "is_configured": true, 00:16:09.398 "data_offset": 2048, 00:16:09.398 "data_size": 63488 00:16:09.398 } 00:16:09.398 ] 00:16:09.398 }' 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.398 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.962 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.962 10:10:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.962 10:10:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.962 10:10:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e42094e-52e2-4191-b019-24697872af27 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.962 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.962 [2024-11-19 10:10:24.133265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:09.962 [2024-11-19 10:10:24.133607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:09.962 [2024-11-19 10:10:24.133633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:09.963 [2024-11-19 10:10:24.134010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:09.963 NewBaseBdev 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.963 10:10:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.963 [2024-11-19 10:10:24.139176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:09.963 [2024-11-19 10:10:24.139204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:09.963 [2024-11-19 10:10:24.139425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.963 [ 00:16:09.963 { 00:16:09.963 "name": "NewBaseBdev", 00:16:09.963 
"aliases": [ 00:16:09.963 "3e42094e-52e2-4191-b019-24697872af27" 00:16:09.963 ], 00:16:09.963 "product_name": "Malloc disk", 00:16:09.963 "block_size": 512, 00:16:09.963 "num_blocks": 65536, 00:16:09.963 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:09.963 "assigned_rate_limits": { 00:16:09.963 "rw_ios_per_sec": 0, 00:16:09.963 "rw_mbytes_per_sec": 0, 00:16:09.963 "r_mbytes_per_sec": 0, 00:16:09.963 "w_mbytes_per_sec": 0 00:16:09.963 }, 00:16:09.963 "claimed": true, 00:16:09.963 "claim_type": "exclusive_write", 00:16:09.963 "zoned": false, 00:16:09.963 "supported_io_types": { 00:16:09.963 "read": true, 00:16:09.963 "write": true, 00:16:09.963 "unmap": true, 00:16:09.963 "flush": true, 00:16:09.963 "reset": true, 00:16:09.963 "nvme_admin": false, 00:16:09.963 "nvme_io": false, 00:16:09.963 "nvme_io_md": false, 00:16:09.963 "write_zeroes": true, 00:16:09.963 "zcopy": true, 00:16:09.963 "get_zone_info": false, 00:16:09.963 "zone_management": false, 00:16:09.963 "zone_append": false, 00:16:09.963 "compare": false, 00:16:09.963 "compare_and_write": false, 00:16:09.963 "abort": true, 00:16:09.963 "seek_hole": false, 00:16:09.963 "seek_data": false, 00:16:09.963 "copy": true, 00:16:09.963 "nvme_iov_md": false 00:16:09.963 }, 00:16:09.963 "memory_domains": [ 00:16:09.963 { 00:16:09.963 "dma_device_id": "system", 00:16:09.963 "dma_device_type": 1 00:16:09.963 }, 00:16:09.963 { 00:16:09.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.963 "dma_device_type": 2 00:16:09.963 } 00:16:09.963 ], 00:16:09.963 "driver_specific": {} 00:16:09.963 } 00:16:09.963 ] 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:09.963 10:10:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.963 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.220 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.220 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.221 "name": "Existed_Raid", 00:16:10.221 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:10.221 "strip_size_kb": 64, 00:16:10.221 "state": "online", 00:16:10.221 "raid_level": "raid5f", 00:16:10.221 "superblock": true, 00:16:10.221 
"num_base_bdevs": 3, 00:16:10.221 "num_base_bdevs_discovered": 3, 00:16:10.221 "num_base_bdevs_operational": 3, 00:16:10.221 "base_bdevs_list": [ 00:16:10.221 { 00:16:10.221 "name": "NewBaseBdev", 00:16:10.221 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:10.221 "is_configured": true, 00:16:10.221 "data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 }, 00:16:10.221 { 00:16:10.221 "name": "BaseBdev2", 00:16:10.221 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:10.221 "is_configured": true, 00:16:10.221 "data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 }, 00:16:10.221 { 00:16:10.221 "name": "BaseBdev3", 00:16:10.221 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:10.221 "is_configured": true, 00:16:10.221 "data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 } 00:16:10.221 ] 00:16:10.221 }' 00:16:10.221 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.221 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.479 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.737 [2024-11-19 10:10:24.713958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.737 "name": "Existed_Raid", 00:16:10.737 "aliases": [ 00:16:10.737 "b8286eeb-896d-4af2-9cfa-001feb8e8a57" 00:16:10.737 ], 00:16:10.737 "product_name": "Raid Volume", 00:16:10.737 "block_size": 512, 00:16:10.737 "num_blocks": 126976, 00:16:10.737 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:10.737 "assigned_rate_limits": { 00:16:10.737 "rw_ios_per_sec": 0, 00:16:10.737 "rw_mbytes_per_sec": 0, 00:16:10.737 "r_mbytes_per_sec": 0, 00:16:10.737 "w_mbytes_per_sec": 0 00:16:10.737 }, 00:16:10.737 "claimed": false, 00:16:10.737 "zoned": false, 00:16:10.737 "supported_io_types": { 00:16:10.737 "read": true, 00:16:10.737 "write": true, 00:16:10.737 "unmap": false, 00:16:10.737 "flush": false, 00:16:10.737 "reset": true, 00:16:10.737 "nvme_admin": false, 00:16:10.737 "nvme_io": false, 00:16:10.737 "nvme_io_md": false, 00:16:10.737 "write_zeroes": true, 00:16:10.737 "zcopy": false, 00:16:10.737 "get_zone_info": false, 00:16:10.737 "zone_management": false, 00:16:10.737 "zone_append": false, 00:16:10.737 "compare": false, 00:16:10.737 "compare_and_write": false, 00:16:10.737 "abort": false, 00:16:10.737 "seek_hole": false, 00:16:10.737 "seek_data": false, 00:16:10.737 "copy": false, 00:16:10.737 "nvme_iov_md": false 00:16:10.737 }, 00:16:10.737 "driver_specific": { 00:16:10.737 "raid": { 00:16:10.737 "uuid": "b8286eeb-896d-4af2-9cfa-001feb8e8a57", 00:16:10.737 
"strip_size_kb": 64, 00:16:10.737 "state": "online", 00:16:10.737 "raid_level": "raid5f", 00:16:10.737 "superblock": true, 00:16:10.737 "num_base_bdevs": 3, 00:16:10.737 "num_base_bdevs_discovered": 3, 00:16:10.737 "num_base_bdevs_operational": 3, 00:16:10.737 "base_bdevs_list": [ 00:16:10.737 { 00:16:10.737 "name": "NewBaseBdev", 00:16:10.737 "uuid": "3e42094e-52e2-4191-b019-24697872af27", 00:16:10.737 "is_configured": true, 00:16:10.737 "data_offset": 2048, 00:16:10.737 "data_size": 63488 00:16:10.737 }, 00:16:10.737 { 00:16:10.737 "name": "BaseBdev2", 00:16:10.737 "uuid": "88b4e7bb-327c-47fc-b74a-e7312a6ef66a", 00:16:10.737 "is_configured": true, 00:16:10.737 "data_offset": 2048, 00:16:10.737 "data_size": 63488 00:16:10.737 }, 00:16:10.737 { 00:16:10.737 "name": "BaseBdev3", 00:16:10.737 "uuid": "3b773027-ff24-4b43-a934-2d6ee1e3b6ee", 00:16:10.737 "is_configured": true, 00:16:10.737 "data_offset": 2048, 00:16:10.737 "data_size": 63488 00:16:10.737 } 00:16:10.737 ] 00:16:10.737 } 00:16:10.737 } 00:16:10.737 }' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:10.737 BaseBdev2 00:16:10.737 BaseBdev3' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.737 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.996 10:10:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.996 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.996 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.996 10:10:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.996 [2024-11-19 10:10:25.049759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.996 [2024-11-19 10:10:25.049812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.996 [2024-11-19 10:10:25.049937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.996 [2024-11-19 10:10:25.050335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.996 [2024-11-19 10:10:25.050361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80841 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80841 ']' 00:16:10.996 10:10:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80841 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80841 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80841' 00:16:10.996 killing process with pid 80841 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80841 00:16:10.996 [2024-11-19 10:10:25.091638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.996 10:10:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80841 00:16:11.255 [2024-11-19 10:10:25.387674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.630 10:10:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:12.630 00:16:12.630 real 0m12.108s 00:16:12.630 user 0m19.823s 00:16:12.630 sys 0m1.841s 00:16:12.630 10:10:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.630 ************************************ 00:16:12.630 END TEST raid5f_state_function_test_sb 00:16:12.630 ************************************ 00:16:12.630 10:10:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.630 10:10:26 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:16:12.630 10:10:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:12.630 10:10:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.630 10:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.630 ************************************ 00:16:12.630 START TEST raid5f_superblock_test 00:16:12.630 ************************************ 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81469 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:12.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81469 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81469 ']' 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.630 10:10:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.630 [2024-11-19 10:10:26.699615] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:16:12.630 [2024-11-19 10:10:26.699878] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81469 ] 00:16:12.938 [2024-11-19 10:10:26.878376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.938 [2024-11-19 10:10:27.025331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.214 [2024-11-19 10:10:27.250628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.214 [2024-11-19 10:10:27.250719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.781 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.781 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:13.781 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:13.781 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.781 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 malloc1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 [2024-11-19 10:10:27.774217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.782 [2024-11-19 10:10:27.774471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.782 [2024-11-19 10:10:27.774572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:13.782 [2024-11-19 10:10:27.774801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.782 [2024-11-19 10:10:27.778047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.782 [2024-11-19 10:10:27.778226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.782 pt1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 malloc2 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 [2024-11-19 10:10:27.839771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.782 [2024-11-19 10:10:27.839874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.782 [2024-11-19 10:10:27.839912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:13.782 [2024-11-19 10:10:27.839929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.782 [2024-11-19 10:10:27.843060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.782 [2024-11-19 10:10:27.843242] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.782 pt2 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 malloc3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 [2024-11-19 10:10:27.917997] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:13.782 [2024-11-19 10:10:27.918085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.782 [2024-11-19 10:10:27.918125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:13.782 [2024-11-19 10:10:27.918143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.782 [2024-11-19 10:10:27.921400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.782 [2024-11-19 10:10:27.921450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:13.782 pt3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 [2024-11-19 10:10:27.930289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.782 [2024-11-19 10:10:27.933088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.782 [2024-11-19 10:10:27.933192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:13.782 [2024-11-19 10:10:27.933454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:13.782 [2024-11-19 10:10:27.933484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:13.782 [2024-11-19 10:10:27.933869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:13.782 [2024-11-19 10:10:27.939217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:13.782 [2024-11-19 10:10:27.939430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:13.782 [2024-11-19 10:10:27.939886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.782 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.782 "name": "raid_bdev1", 00:16:13.782 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:13.782 "strip_size_kb": 64, 00:16:13.782 "state": "online", 00:16:13.782 "raid_level": "raid5f", 00:16:13.782 "superblock": true, 00:16:13.782 "num_base_bdevs": 3, 00:16:13.782 "num_base_bdevs_discovered": 3, 00:16:13.782 "num_base_bdevs_operational": 3, 00:16:13.782 "base_bdevs_list": [ 00:16:13.782 { 00:16:13.782 "name": "pt1", 00:16:13.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.782 "is_configured": true, 00:16:13.782 "data_offset": 2048, 00:16:13.782 "data_size": 63488 00:16:13.782 }, 00:16:13.782 { 00:16:13.782 "name": "pt2", 00:16:13.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.782 "is_configured": true, 00:16:13.782 "data_offset": 2048, 00:16:13.782 "data_size": 63488 00:16:13.782 }, 00:16:13.782 { 00:16:13.782 "name": "pt3", 00:16:13.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.782 "is_configured": true, 00:16:13.782 "data_offset": 2048, 00:16:13.782 "data_size": 63488 00:16:13.783 } 00:16:13.783 ] 00:16:13.783 }' 00:16:13.783 10:10:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.783 10:10:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:14.349 10:10:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.349 [2024-11-19 10:10:28.414629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.349 "name": "raid_bdev1", 00:16:14.349 "aliases": [ 00:16:14.349 "d34e5a7f-a697-455d-92c6-ae67330d9aca" 00:16:14.349 ], 00:16:14.349 "product_name": "Raid Volume", 00:16:14.349 "block_size": 512, 00:16:14.349 "num_blocks": 126976, 00:16:14.349 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:14.349 "assigned_rate_limits": { 00:16:14.349 "rw_ios_per_sec": 0, 00:16:14.349 "rw_mbytes_per_sec": 0, 00:16:14.349 "r_mbytes_per_sec": 0, 00:16:14.349 "w_mbytes_per_sec": 0 00:16:14.349 }, 00:16:14.349 "claimed": false, 00:16:14.349 "zoned": false, 00:16:14.349 "supported_io_types": { 00:16:14.349 "read": true, 00:16:14.349 "write": true, 00:16:14.349 "unmap": false, 00:16:14.349 "flush": false, 00:16:14.349 "reset": true, 00:16:14.349 "nvme_admin": false, 00:16:14.349 "nvme_io": false, 00:16:14.349 "nvme_io_md": false, 
00:16:14.349 "write_zeroes": true, 00:16:14.349 "zcopy": false, 00:16:14.349 "get_zone_info": false, 00:16:14.349 "zone_management": false, 00:16:14.349 "zone_append": false, 00:16:14.349 "compare": false, 00:16:14.349 "compare_and_write": false, 00:16:14.349 "abort": false, 00:16:14.349 "seek_hole": false, 00:16:14.349 "seek_data": false, 00:16:14.349 "copy": false, 00:16:14.349 "nvme_iov_md": false 00:16:14.349 }, 00:16:14.349 "driver_specific": { 00:16:14.349 "raid": { 00:16:14.349 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:14.349 "strip_size_kb": 64, 00:16:14.349 "state": "online", 00:16:14.349 "raid_level": "raid5f", 00:16:14.349 "superblock": true, 00:16:14.349 "num_base_bdevs": 3, 00:16:14.349 "num_base_bdevs_discovered": 3, 00:16:14.349 "num_base_bdevs_operational": 3, 00:16:14.349 "base_bdevs_list": [ 00:16:14.349 { 00:16:14.349 "name": "pt1", 00:16:14.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.349 "is_configured": true, 00:16:14.349 "data_offset": 2048, 00:16:14.349 "data_size": 63488 00:16:14.349 }, 00:16:14.349 { 00:16:14.349 "name": "pt2", 00:16:14.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.349 "is_configured": true, 00:16:14.349 "data_offset": 2048, 00:16:14.349 "data_size": 63488 00:16:14.349 }, 00:16:14.349 { 00:16:14.349 "name": "pt3", 00:16:14.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.349 "is_configured": true, 00:16:14.349 "data_offset": 2048, 00:16:14.349 "data_size": 63488 00:16:14.349 } 00:16:14.349 ] 00:16:14.349 } 00:16:14.349 } 00:16:14.349 }' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:14.349 pt2 00:16:14.349 pt3' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.349 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.606 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.606 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.606 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.606 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.606 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.607 
10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 [2024-11-19 10:10:28.718699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d34e5a7f-a697-455d-92c6-ae67330d9aca 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d34e5a7f-a697-455d-92c6-ae67330d9aca ']' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.607 10:10:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 [2024-11-19 10:10:28.770471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.607 [2024-11-19 10:10:28.770691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.607 [2024-11-19 10:10:28.770870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.607 [2024-11-19 10:10:28.770998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.607 [2024-11-19 10:10:28.771018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.607 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 [2024-11-19 10:10:28.914625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.865 [2024-11-19 10:10:28.917443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.865 [2024-11-19 10:10:28.917676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:14.865 [2024-11-19 10:10:28.917819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.865 [2024-11-19 10:10:28.917919] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.865 [2024-11-19 10:10:28.917954] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:14.865 [2024-11-19 10:10:28.917983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.865 [2024-11-19 10:10:28.917998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:14.865 request: 00:16:14.865 { 00:16:14.865 "name": "raid_bdev1", 00:16:14.865 "raid_level": "raid5f", 00:16:14.865 "base_bdevs": [ 00:16:14.865 "malloc1", 00:16:14.865 "malloc2", 00:16:14.865 "malloc3" 00:16:14.865 ], 00:16:14.865 "strip_size_kb": 64, 00:16:14.865 "superblock": false, 00:16:14.865 "method": "bdev_raid_create", 00:16:14.865 "req_id": 1 00:16:14.865 } 00:16:14.865 Got JSON-RPC error response 00:16:14.865 response: 00:16:14.865 { 00:16:14.865 "code": -17, 00:16:14.865 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.865 } 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:14.865 
10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.865 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.866 [2024-11-19 10:10:28.982607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.866 [2024-11-19 10:10:28.982852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.866 [2024-11-19 10:10:28.982950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:14.866 [2024-11-19 10:10:28.983158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.866 [2024-11-19 10:10:28.986440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.866 [2024-11-19 10:10:28.986611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.866 [2024-11-19 10:10:28.986903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:14.866 [2024-11-19 10:10:28.987102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.866 pt1 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.866 10:10:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.866 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.866 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.866 "name": "raid_bdev1", 00:16:14.866 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:14.866 "strip_size_kb": 64, 00:16:14.866 "state": "configuring", 00:16:14.866 "raid_level": "raid5f", 00:16:14.866 "superblock": true, 00:16:14.866 "num_base_bdevs": 3, 00:16:14.866 "num_base_bdevs_discovered": 1, 00:16:14.866 
"num_base_bdevs_operational": 3, 00:16:14.866 "base_bdevs_list": [ 00:16:14.866 { 00:16:14.866 "name": "pt1", 00:16:14.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.866 "is_configured": true, 00:16:14.866 "data_offset": 2048, 00:16:14.866 "data_size": 63488 00:16:14.866 }, 00:16:14.866 { 00:16:14.866 "name": null, 00:16:14.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.866 "is_configured": false, 00:16:14.866 "data_offset": 2048, 00:16:14.866 "data_size": 63488 00:16:14.866 }, 00:16:14.866 { 00:16:14.866 "name": null, 00:16:14.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.866 "is_configured": false, 00:16:14.866 "data_offset": 2048, 00:16:14.866 "data_size": 63488 00:16:14.866 } 00:16:14.866 ] 00:16:14.866 }' 00:16:14.866 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.866 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 [2024-11-19 10:10:29.483160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.446 [2024-11-19 10:10:29.483253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.446 [2024-11-19 10:10:29.483292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:15.446 [2024-11-19 10:10:29.483309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.446 [2024-11-19 10:10:29.483996] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.446 [2024-11-19 10:10:29.484041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.446 [2024-11-19 10:10:29.484230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.446 [2024-11-19 10:10:29.484271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.446 pt2 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:15.446 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.447 [2024-11-19 10:10:29.491168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.447 "name": "raid_bdev1", 00:16:15.447 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:15.447 "strip_size_kb": 64, 00:16:15.447 "state": "configuring", 00:16:15.447 "raid_level": "raid5f", 00:16:15.447 "superblock": true, 00:16:15.447 "num_base_bdevs": 3, 00:16:15.447 "num_base_bdevs_discovered": 1, 00:16:15.447 "num_base_bdevs_operational": 3, 00:16:15.447 "base_bdevs_list": [ 00:16:15.447 { 00:16:15.447 "name": "pt1", 00:16:15.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.447 "is_configured": true, 00:16:15.447 "data_offset": 2048, 00:16:15.447 "data_size": 63488 00:16:15.447 }, 00:16:15.447 { 00:16:15.447 "name": null, 00:16:15.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.447 "is_configured": false, 00:16:15.447 "data_offset": 0, 00:16:15.447 "data_size": 63488 00:16:15.447 }, 00:16:15.447 { 00:16:15.447 "name": null, 00:16:15.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.447 "is_configured": false, 00:16:15.447 "data_offset": 2048, 00:16:15.447 "data_size": 63488 00:16:15.447 } 00:16:15.447 ] 00:16:15.447 }' 00:16:15.447 10:10:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.447 10:10:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.014 [2024-11-19 10:10:30.019262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.014 [2024-11-19 10:10:30.019563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.014 [2024-11-19 10:10:30.019742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:16.014 [2024-11-19 10:10:30.019924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.014 [2024-11-19 10:10:30.020766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.014 [2024-11-19 10:10:30.020962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.014 [2024-11-19 10:10:30.021232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:16.014 [2024-11-19 10:10:30.021410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.014 pt2 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:16.014 10:10:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.014 [2024-11-19 10:10:30.031263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.014 [2024-11-19 10:10:30.031345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.014 [2024-11-19 10:10:30.031375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:16.014 [2024-11-19 10:10:30.031394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.014 [2024-11-19 10:10:30.032058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.014 [2024-11-19 10:10:30.032109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.014 [2024-11-19 10:10:30.032259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:16.014 [2024-11-19 10:10:30.032309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.014 [2024-11-19 10:10:30.032517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:16.014 [2024-11-19 10:10:30.032541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:16.014 [2024-11-19 10:10:30.032933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:16.014 [2024-11-19 10:10:30.038058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:16.014 [2024-11-19 10:10:30.038250] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:16.014 [2024-11-19 10:10:30.038598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.014 pt3 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.014 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.015 "name": "raid_bdev1", 00:16:16.015 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:16.015 "strip_size_kb": 64, 00:16:16.015 "state": "online", 00:16:16.015 "raid_level": "raid5f", 00:16:16.015 "superblock": true, 00:16:16.015 "num_base_bdevs": 3, 00:16:16.015 "num_base_bdevs_discovered": 3, 00:16:16.015 "num_base_bdevs_operational": 3, 00:16:16.015 "base_bdevs_list": [ 00:16:16.015 { 00:16:16.015 "name": "pt1", 00:16:16.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.015 "is_configured": true, 00:16:16.015 "data_offset": 2048, 00:16:16.015 "data_size": 63488 00:16:16.015 }, 00:16:16.015 { 00:16:16.015 "name": "pt2", 00:16:16.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.015 "is_configured": true, 00:16:16.015 "data_offset": 2048, 00:16:16.015 "data_size": 63488 00:16:16.015 }, 00:16:16.015 { 00:16:16.015 "name": "pt3", 00:16:16.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.015 "is_configured": true, 00:16:16.015 "data_offset": 2048, 00:16:16.015 "data_size": 63488 00:16:16.015 } 00:16:16.015 ] 00:16:16.015 }' 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.015 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.581 [2024-11-19 10:10:30.609249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.581 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.581 "name": "raid_bdev1", 00:16:16.581 "aliases": [ 00:16:16.581 "d34e5a7f-a697-455d-92c6-ae67330d9aca" 00:16:16.581 ], 00:16:16.581 "product_name": "Raid Volume", 00:16:16.581 "block_size": 512, 00:16:16.581 "num_blocks": 126976, 00:16:16.581 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:16.581 "assigned_rate_limits": { 00:16:16.581 "rw_ios_per_sec": 0, 00:16:16.581 "rw_mbytes_per_sec": 0, 00:16:16.581 "r_mbytes_per_sec": 0, 00:16:16.581 "w_mbytes_per_sec": 0 00:16:16.581 }, 00:16:16.581 "claimed": false, 00:16:16.581 "zoned": false, 00:16:16.581 "supported_io_types": { 00:16:16.581 "read": true, 00:16:16.581 "write": true, 00:16:16.581 "unmap": false, 00:16:16.581 "flush": false, 00:16:16.581 "reset": true, 00:16:16.581 "nvme_admin": false, 00:16:16.581 "nvme_io": false, 00:16:16.581 "nvme_io_md": false, 00:16:16.581 "write_zeroes": true, 00:16:16.581 "zcopy": false, 00:16:16.581 
"get_zone_info": false, 00:16:16.581 "zone_management": false, 00:16:16.581 "zone_append": false, 00:16:16.581 "compare": false, 00:16:16.582 "compare_and_write": false, 00:16:16.582 "abort": false, 00:16:16.582 "seek_hole": false, 00:16:16.582 "seek_data": false, 00:16:16.582 "copy": false, 00:16:16.582 "nvme_iov_md": false 00:16:16.582 }, 00:16:16.582 "driver_specific": { 00:16:16.582 "raid": { 00:16:16.582 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:16.582 "strip_size_kb": 64, 00:16:16.582 "state": "online", 00:16:16.582 "raid_level": "raid5f", 00:16:16.582 "superblock": true, 00:16:16.582 "num_base_bdevs": 3, 00:16:16.582 "num_base_bdevs_discovered": 3, 00:16:16.582 "num_base_bdevs_operational": 3, 00:16:16.582 "base_bdevs_list": [ 00:16:16.582 { 00:16:16.582 "name": "pt1", 00:16:16.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.582 "is_configured": true, 00:16:16.582 "data_offset": 2048, 00:16:16.582 "data_size": 63488 00:16:16.582 }, 00:16:16.582 { 00:16:16.582 "name": "pt2", 00:16:16.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.582 "is_configured": true, 00:16:16.582 "data_offset": 2048, 00:16:16.582 "data_size": 63488 00:16:16.582 }, 00:16:16.582 { 00:16:16.582 "name": "pt3", 00:16:16.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.582 "is_configured": true, 00:16:16.582 "data_offset": 2048, 00:16:16.582 "data_size": 63488 00:16:16.582 } 00:16:16.582 ] 00:16:16.582 } 00:16:16.582 } 00:16:16.582 }' 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.582 pt2 00:16:16.582 pt3' 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.582 10:10:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.582 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 [2024-11-19 10:10:30.945310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d34e5a7f-a697-455d-92c6-ae67330d9aca '!=' d34e5a7f-a697-455d-92c6-ae67330d9aca ']' 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 [2024-11-19 10:10:30.993172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.841 10:10:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.841 
10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.841 "name": "raid_bdev1", 00:16:16.841 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:16.841 "strip_size_kb": 64, 00:16:16.841 "state": "online", 00:16:16.841 "raid_level": "raid5f", 00:16:16.841 "superblock": true, 00:16:16.841 "num_base_bdevs": 3, 00:16:16.841 "num_base_bdevs_discovered": 2, 00:16:16.841 "num_base_bdevs_operational": 2, 00:16:16.841 "base_bdevs_list": [ 00:16:16.841 { 00:16:16.841 "name": null, 00:16:16.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.841 "is_configured": false, 00:16:16.841 "data_offset": 0, 00:16:16.841 "data_size": 63488 00:16:16.841 }, 00:16:16.841 { 00:16:16.841 "name": "pt2", 00:16:16.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.841 "is_configured": true, 00:16:16.841 "data_offset": 2048, 00:16:16.841 "data_size": 63488 00:16:16.841 }, 00:16:16.841 { 00:16:16.841 "name": "pt3", 00:16:16.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.841 "is_configured": true, 00:16:16.841 "data_offset": 2048, 00:16:16.841 "data_size": 63488 00:16:16.841 } 00:16:16.841 ] 00:16:16.841 }' 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.841 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 [2024-11-19 10:10:31.505226] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.409 [2024-11-19 10:10:31.505268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.409 [2024-11-19 10:10:31.505389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.409 [2024-11-19 10:10:31.505481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.409 [2024-11-19 10:10:31.505507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 [2024-11-19 10:10:31.589233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.409 [2024-11-19 10:10:31.589340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.409 [2024-11-19 10:10:31.589372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:17.409 [2024-11-19 10:10:31.589392] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:17.409 [2024-11-19 10:10:31.593097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.409 [2024-11-19 10:10:31.593161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.409 [2024-11-19 10:10:31.593302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:17.409 [2024-11-19 10:10:31.593380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.409 pt2 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.668 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.668 "name": "raid_bdev1", 00:16:17.668 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:17.668 "strip_size_kb": 64, 00:16:17.668 "state": "configuring", 00:16:17.668 "raid_level": "raid5f", 00:16:17.668 "superblock": true, 00:16:17.668 "num_base_bdevs": 3, 00:16:17.668 "num_base_bdevs_discovered": 1, 00:16:17.668 "num_base_bdevs_operational": 2, 00:16:17.668 "base_bdevs_list": [ 00:16:17.668 { 00:16:17.668 "name": null, 00:16:17.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.668 "is_configured": false, 00:16:17.668 "data_offset": 2048, 00:16:17.668 "data_size": 63488 00:16:17.668 }, 00:16:17.668 { 00:16:17.668 "name": "pt2", 00:16:17.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.668 "is_configured": true, 00:16:17.668 "data_offset": 2048, 00:16:17.668 "data_size": 63488 00:16:17.668 }, 00:16:17.668 { 00:16:17.668 "name": null, 00:16:17.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.668 "is_configured": false, 00:16:17.668 "data_offset": 2048, 00:16:17.668 "data_size": 63488 00:16:17.668 } 00:16:17.668 ] 00:16:17.668 }' 00:16:17.668 10:10:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.668 10:10:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.927 [2024-11-19 10:10:32.105520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:17.927 [2024-11-19 10:10:32.105617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.927 [2024-11-19 10:10:32.105658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:17.927 [2024-11-19 10:10:32.105678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.927 [2024-11-19 10:10:32.106375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.927 [2024-11-19 10:10:32.106418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:17.927 [2024-11-19 10:10:32.106533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:17.927 [2024-11-19 10:10:32.106584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:17.927 [2024-11-19 10:10:32.106746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:17.927 [2024-11-19 10:10:32.106768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:17.927 [2024-11-19 10:10:32.107119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:17.927 [2024-11-19 10:10:32.112141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:17.927 [2024-11-19 10:10:32.112312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:17.927 [2024-11-19 10:10:32.112763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.927 pt3 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.927 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.928 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.185 10:10:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.185 "name": "raid_bdev1", 00:16:18.185 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:18.185 "strip_size_kb": 64, 00:16:18.185 "state": "online", 00:16:18.185 "raid_level": "raid5f", 00:16:18.186 "superblock": true, 00:16:18.186 "num_base_bdevs": 3, 00:16:18.186 "num_base_bdevs_discovered": 2, 00:16:18.186 "num_base_bdevs_operational": 2, 00:16:18.186 "base_bdevs_list": [ 00:16:18.186 { 00:16:18.186 "name": null, 00:16:18.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.186 "is_configured": false, 00:16:18.186 "data_offset": 2048, 00:16:18.186 "data_size": 63488 00:16:18.186 }, 00:16:18.186 { 00:16:18.186 "name": "pt2", 00:16:18.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.186 "is_configured": true, 00:16:18.186 "data_offset": 2048, 00:16:18.186 "data_size": 63488 00:16:18.186 }, 00:16:18.186 { 00:16:18.186 "name": "pt3", 00:16:18.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.186 "is_configured": true, 00:16:18.186 "data_offset": 2048, 00:16:18.186 "data_size": 63488 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 }' 00:16:18.186 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.186 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.444 [2024-11-19 10:10:32.626932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.444 [2024-11-19 10:10:32.627150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.444 [2024-11-19 10:10:32.627409] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.444 [2024-11-19 10:10:32.627633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.444 [2024-11-19 10:10:32.627773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.444 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 [2024-11-19 10:10:32.695030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.702 [2024-11-19 10:10:32.695138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.702 [2024-11-19 10:10:32.695173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:18.702 [2024-11-19 10:10:32.695190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.702 [2024-11-19 10:10:32.698390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.702 [2024-11-19 10:10:32.698618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.702 [2024-11-19 10:10:32.698774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:18.702 [2024-11-19 10:10:32.698871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.702 [2024-11-19 10:10:32.699069] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:18.702 [2024-11-19 10:10:32.699089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.702 [2024-11-19 10:10:32.699115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:18.702 [2024-11-19 10:10:32.699191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.702 pt1 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:18.702 10:10:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.702 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.703 "name": "raid_bdev1", 00:16:18.703 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:18.703 "strip_size_kb": 64, 00:16:18.703 "state": "configuring", 00:16:18.703 "raid_level": "raid5f", 00:16:18.703 
"superblock": true, 00:16:18.703 "num_base_bdevs": 3, 00:16:18.703 "num_base_bdevs_discovered": 1, 00:16:18.703 "num_base_bdevs_operational": 2, 00:16:18.703 "base_bdevs_list": [ 00:16:18.703 { 00:16:18.703 "name": null, 00:16:18.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.703 "is_configured": false, 00:16:18.703 "data_offset": 2048, 00:16:18.703 "data_size": 63488 00:16:18.703 }, 00:16:18.703 { 00:16:18.703 "name": "pt2", 00:16:18.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.703 "is_configured": true, 00:16:18.703 "data_offset": 2048, 00:16:18.703 "data_size": 63488 00:16:18.703 }, 00:16:18.703 { 00:16:18.703 "name": null, 00:16:18.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.703 "is_configured": false, 00:16:18.703 "data_offset": 2048, 00:16:18.703 "data_size": 63488 00:16:18.703 } 00:16:18.703 ] 00:16:18.703 }' 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.703 10:10:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 [2024-11-19 10:10:33.275393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.270 [2024-11-19 10:10:33.275486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.270 [2024-11-19 10:10:33.275524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:19.270 [2024-11-19 10:10:33.275540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.270 [2024-11-19 10:10:33.276251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.270 [2024-11-19 10:10:33.276285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.270 [2024-11-19 10:10:33.276417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:19.270 [2024-11-19 10:10:33.276452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.270 [2024-11-19 10:10:33.276625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:19.270 [2024-11-19 10:10:33.276642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:19.270 [2024-11-19 10:10:33.276995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:19.270 [2024-11-19 10:10:33.282065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:19.270 [2024-11-19 10:10:33.282123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:19.270 [2024-11-19 10:10:33.282482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.270 pt3 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.270 "name": "raid_bdev1", 00:16:19.270 "uuid": "d34e5a7f-a697-455d-92c6-ae67330d9aca", 00:16:19.270 "strip_size_kb": 64, 00:16:19.270 "state": "online", 00:16:19.270 "raid_level": 
"raid5f", 00:16:19.270 "superblock": true, 00:16:19.270 "num_base_bdevs": 3, 00:16:19.270 "num_base_bdevs_discovered": 2, 00:16:19.270 "num_base_bdevs_operational": 2, 00:16:19.270 "base_bdevs_list": [ 00:16:19.270 { 00:16:19.270 "name": null, 00:16:19.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.270 "is_configured": false, 00:16:19.270 "data_offset": 2048, 00:16:19.270 "data_size": 63488 00:16:19.270 }, 00:16:19.270 { 00:16:19.270 "name": "pt2", 00:16:19.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.270 "is_configured": true, 00:16:19.270 "data_offset": 2048, 00:16:19.270 "data_size": 63488 00:16:19.270 }, 00:16:19.270 { 00:16:19.270 "name": "pt3", 00:16:19.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.270 "is_configured": true, 00:16:19.270 "data_offset": 2048, 00:16:19.270 "data_size": 63488 00:16:19.270 } 00:16:19.270 ] 00:16:19.270 }' 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.270 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:19.838 [2024-11-19 10:10:33.889146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d34e5a7f-a697-455d-92c6-ae67330d9aca '!=' d34e5a7f-a697-455d-92c6-ae67330d9aca ']' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81469 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81469 ']' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81469 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81469 00:16:19.838 killing process with pid 81469 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81469' 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81469 00:16:19.838 [2024-11-19 10:10:33.980307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.838 10:10:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81469 00:16:19.838 [2024-11-19 10:10:33.980457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.838 [2024-11-19 10:10:33.980559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.838 [2024-11-19 10:10:33.980580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:20.096 [2024-11-19 10:10:34.279325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.472 10:10:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:21.472 00:16:21.472 real 0m8.814s 00:16:21.472 user 0m14.173s 00:16:21.472 sys 0m1.344s 00:16:21.472 10:10:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.472 10:10:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.472 ************************************ 00:16:21.472 END TEST raid5f_superblock_test 00:16:21.472 ************************************ 00:16:21.472 10:10:35 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:21.472 10:10:35 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:21.472 10:10:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:21.472 10:10:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.472 10:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.472 ************************************ 00:16:21.472 START TEST raid5f_rebuild_test 00:16:21.472 ************************************ 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:21.472 10:10:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81927 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81927 00:16:21.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81927 ']' 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.472 10:10:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.472 [2024-11-19 10:10:35.575215] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:16:21.472 [2024-11-19 10:10:35.575674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81927 ] 00:16:21.472 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:21.472 Zero copy mechanism will not be used. 00:16:21.730 [2024-11-19 10:10:35.767402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.730 [2024-11-19 10:10:35.942117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.989 [2024-11-19 10:10:36.170070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.989 [2024-11-19 10:10:36.170131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.555 BaseBdev1_malloc 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.555 
10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.555 [2024-11-19 10:10:36.709706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.555 [2024-11-19 10:10:36.710038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.555 [2024-11-19 10:10:36.710089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.555 [2024-11-19 10:10:36.710112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.555 [2024-11-19 10:10:36.713285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.555 [2024-11-19 10:10:36.713476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.555 BaseBdev1 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.555 BaseBdev2_malloc 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.555 [2024-11-19 10:10:36.770385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:22.555 [2024-11-19 10:10:36.770492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.555 [2024-11-19 10:10:36.770529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:22.555 [2024-11-19 10:10:36.770552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.555 [2024-11-19 10:10:36.773671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.555 [2024-11-19 10:10:36.773728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.555 BaseBdev2 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.555 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 BaseBdev3_malloc 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 [2024-11-19 10:10:36.838577] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:22.814 [2024-11-19 10:10:36.838677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.814 [2024-11-19 10:10:36.838718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:22.814 [2024-11-19 10:10:36.838739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.814 [2024-11-19 10:10:36.841948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.814 [2024-11-19 10:10:36.842007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:22.814 BaseBdev3 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 spare_malloc 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 spare_delay 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:22.814 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 [2024-11-19 10:10:36.911774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.814 [2024-11-19 10:10:36.911900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.814 [2024-11-19 10:10:36.911939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:22.815 [2024-11-19 10:10:36.911959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.815 [2024-11-19 10:10:36.915300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.815 [2024-11-19 10:10:36.915370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.815 spare 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 [2024-11-19 10:10:36.923949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.815 [2024-11-19 10:10:36.926753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.815 [2024-11-19 10:10:36.926906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.815 [2024-11-19 10:10:36.927068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:22.815 [2024-11-19 10:10:36.927088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:22.815 [2024-11-19 
10:10:36.927519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:22.815 [2024-11-19 10:10:36.933056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:22.815 [2024-11-19 10:10:36.933249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:22.815 [2024-11-19 10:10:36.933748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.815 "name": "raid_bdev1", 00:16:22.815 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:22.815 "strip_size_kb": 64, 00:16:22.815 "state": "online", 00:16:22.815 "raid_level": "raid5f", 00:16:22.815 "superblock": false, 00:16:22.815 "num_base_bdevs": 3, 00:16:22.815 "num_base_bdevs_discovered": 3, 00:16:22.815 "num_base_bdevs_operational": 3, 00:16:22.815 "base_bdevs_list": [ 00:16:22.815 { 00:16:22.815 "name": "BaseBdev1", 00:16:22.815 "uuid": "b9fe260d-87d2-5686-beec-8bfe6699654b", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "name": "BaseBdev2", 00:16:22.815 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "name": "BaseBdev3", 00:16:22.815 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 } 00:16:22.815 ] 00:16:22.815 }' 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.815 10:10:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.384 10:10:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.384 [2024-11-19 10:10:37.504618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.384 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:23.642 [2024-11-19 10:10:37.860866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:23.901 /dev/nbd0 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.901 1+0 records in 00:16:23.901 1+0 records out 00:16:23.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408705 s, 10.0 MB/s 00:16:23.901 
10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:23.901 10:10:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:24.467 512+0 records in 00:16:24.467 512+0 records out 00:16:24.467 67108864 bytes (67 MB, 64 MiB) copied, 0.509072 s, 132 MB/s 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:24.467 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.724 [2024-11-19 10:10:38.781623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.724 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.724 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.724 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.725 [2024-11-19 10:10:38.795964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.725 10:10:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.725 "name": "raid_bdev1", 00:16:24.725 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:24.725 "strip_size_kb": 64, 00:16:24.725 "state": "online", 00:16:24.725 "raid_level": "raid5f", 00:16:24.725 "superblock": false, 00:16:24.725 "num_base_bdevs": 3, 00:16:24.725 "num_base_bdevs_discovered": 2, 00:16:24.725 "num_base_bdevs_operational": 2, 00:16:24.725 "base_bdevs_list": [ 00:16:24.725 { 00:16:24.725 "name": null, 00:16:24.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.725 "is_configured": false, 00:16:24.725 "data_offset": 0, 00:16:24.725 "data_size": 65536 00:16:24.725 }, 00:16:24.725 { 00:16:24.725 
"name": "BaseBdev2", 00:16:24.725 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:24.725 "is_configured": true, 00:16:24.725 "data_offset": 0, 00:16:24.725 "data_size": 65536 00:16:24.725 }, 00:16:24.725 { 00:16:24.725 "name": "BaseBdev3", 00:16:24.725 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:24.725 "is_configured": true, 00:16:24.725 "data_offset": 0, 00:16:24.725 "data_size": 65536 00:16:24.725 } 00:16:24.725 ] 00:16:24.725 }' 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.725 10:10:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.291 10:10:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.291 10:10:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.291 10:10:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.291 [2024-11-19 10:10:39.328194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.291 [2024-11-19 10:10:39.344964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:25.291 10:10:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.291 10:10:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:25.291 [2024-11-19 10:10:39.353175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.225 "name": "raid_bdev1", 00:16:26.225 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:26.225 "strip_size_kb": 64, 00:16:26.225 "state": "online", 00:16:26.225 "raid_level": "raid5f", 00:16:26.225 "superblock": false, 00:16:26.225 "num_base_bdevs": 3, 00:16:26.225 "num_base_bdevs_discovered": 3, 00:16:26.225 "num_base_bdevs_operational": 3, 00:16:26.225 "process": { 00:16:26.225 "type": "rebuild", 00:16:26.225 "target": "spare", 00:16:26.225 "progress": { 00:16:26.225 "blocks": 18432, 00:16:26.225 "percent": 14 00:16:26.225 } 00:16:26.225 }, 00:16:26.225 "base_bdevs_list": [ 00:16:26.225 { 00:16:26.225 "name": "spare", 00:16:26.225 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:26.225 "is_configured": true, 00:16:26.225 "data_offset": 0, 00:16:26.225 "data_size": 65536 00:16:26.225 }, 00:16:26.225 { 00:16:26.225 "name": "BaseBdev2", 00:16:26.225 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:26.225 "is_configured": true, 00:16:26.225 "data_offset": 0, 00:16:26.225 "data_size": 65536 00:16:26.225 }, 00:16:26.225 { 00:16:26.225 "name": "BaseBdev3", 00:16:26.225 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:26.225 "is_configured": true, 00:16:26.225 "data_offset": 0, 00:16:26.225 
"data_size": 65536 00:16:26.225 } 00:16:26.225 ] 00:16:26.225 }' 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.225 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.484 [2024-11-19 10:10:40.504192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.484 [2024-11-19 10:10:40.572683] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.484 [2024-11-19 10:10:40.572836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.484 [2024-11-19 10:10:40.572873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.484 [2024-11-19 10:10:40.572886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.484 "name": "raid_bdev1", 00:16:26.484 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:26.484 "strip_size_kb": 64, 00:16:26.484 "state": "online", 00:16:26.484 "raid_level": "raid5f", 00:16:26.484 "superblock": false, 00:16:26.484 "num_base_bdevs": 3, 00:16:26.484 "num_base_bdevs_discovered": 2, 00:16:26.484 "num_base_bdevs_operational": 2, 00:16:26.484 "base_bdevs_list": [ 00:16:26.484 { 00:16:26.484 "name": null, 00:16:26.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.484 "is_configured": false, 00:16:26.484 "data_offset": 0, 00:16:26.484 "data_size": 65536 00:16:26.484 }, 00:16:26.484 { 00:16:26.484 "name": "BaseBdev2", 00:16:26.484 
"uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:26.484 "is_configured": true, 00:16:26.484 "data_offset": 0, 00:16:26.484 "data_size": 65536 00:16:26.484 }, 00:16:26.484 { 00:16:26.484 "name": "BaseBdev3", 00:16:26.484 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:26.484 "is_configured": true, 00:16:26.484 "data_offset": 0, 00:16:26.484 "data_size": 65536 00:16:26.484 } 00:16:26.484 ] 00:16:26.484 }' 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.484 10:10:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.050 "name": "raid_bdev1", 00:16:27.050 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:27.050 "strip_size_kb": 64, 00:16:27.050 "state": "online", 00:16:27.050 "raid_level": 
"raid5f", 00:16:27.050 "superblock": false, 00:16:27.050 "num_base_bdevs": 3, 00:16:27.050 "num_base_bdevs_discovered": 2, 00:16:27.050 "num_base_bdevs_operational": 2, 00:16:27.050 "base_bdevs_list": [ 00:16:27.050 { 00:16:27.050 "name": null, 00:16:27.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.050 "is_configured": false, 00:16:27.050 "data_offset": 0, 00:16:27.050 "data_size": 65536 00:16:27.050 }, 00:16:27.050 { 00:16:27.050 "name": "BaseBdev2", 00:16:27.050 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:27.050 "is_configured": true, 00:16:27.050 "data_offset": 0, 00:16:27.050 "data_size": 65536 00:16:27.050 }, 00:16:27.050 { 00:16:27.050 "name": "BaseBdev3", 00:16:27.050 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:27.050 "is_configured": true, 00:16:27.050 "data_offset": 0, 00:16:27.050 "data_size": 65536 00:16:27.050 } 00:16:27.050 ] 00:16:27.050 }' 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.050 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.309 [2024-11-19 10:10:41.295311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.309 [2024-11-19 10:10:41.311381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.309 10:10:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:27.309 [2024-11-19 10:10:41.319296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.243 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.243 "name": "raid_bdev1", 00:16:28.243 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:28.243 "strip_size_kb": 64, 00:16:28.243 "state": "online", 00:16:28.243 "raid_level": "raid5f", 00:16:28.243 "superblock": false, 00:16:28.243 "num_base_bdevs": 3, 00:16:28.243 "num_base_bdevs_discovered": 3, 00:16:28.243 "num_base_bdevs_operational": 3, 00:16:28.243 "process": { 00:16:28.243 "type": "rebuild", 00:16:28.243 "target": "spare", 00:16:28.243 "progress": { 00:16:28.243 "blocks": 18432, 00:16:28.243 
"percent": 14 00:16:28.243 } 00:16:28.243 }, 00:16:28.243 "base_bdevs_list": [ 00:16:28.243 { 00:16:28.243 "name": "spare", 00:16:28.243 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:28.243 "is_configured": true, 00:16:28.243 "data_offset": 0, 00:16:28.243 "data_size": 65536 00:16:28.243 }, 00:16:28.243 { 00:16:28.243 "name": "BaseBdev2", 00:16:28.243 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:28.243 "is_configured": true, 00:16:28.243 "data_offset": 0, 00:16:28.243 "data_size": 65536 00:16:28.243 }, 00:16:28.243 { 00:16:28.243 "name": "BaseBdev3", 00:16:28.243 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:28.243 "is_configured": true, 00:16:28.243 "data_offset": 0, 00:16:28.243 "data_size": 65536 00:16:28.244 } 00:16:28.244 ] 00:16:28.244 }' 00:16:28.244 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.244 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.244 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.502 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.502 "name": "raid_bdev1", 00:16:28.502 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:28.502 "strip_size_kb": 64, 00:16:28.502 "state": "online", 00:16:28.502 "raid_level": "raid5f", 00:16:28.502 "superblock": false, 00:16:28.502 "num_base_bdevs": 3, 00:16:28.502 "num_base_bdevs_discovered": 3, 00:16:28.502 "num_base_bdevs_operational": 3, 00:16:28.502 "process": { 00:16:28.502 "type": "rebuild", 00:16:28.502 "target": "spare", 00:16:28.502 "progress": { 00:16:28.502 "blocks": 22528, 00:16:28.502 "percent": 17 00:16:28.502 } 00:16:28.502 }, 00:16:28.502 "base_bdevs_list": [ 00:16:28.502 { 00:16:28.502 "name": "spare", 00:16:28.502 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:28.502 "is_configured": true, 00:16:28.502 "data_offset": 0, 00:16:28.502 "data_size": 65536 00:16:28.502 }, 00:16:28.502 { 00:16:28.502 "name": "BaseBdev2", 00:16:28.502 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:28.502 "is_configured": true, 00:16:28.502 "data_offset": 0, 00:16:28.503 
"data_size": 65536 00:16:28.503 }, 00:16:28.503 { 00:16:28.503 "name": "BaseBdev3", 00:16:28.503 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:28.503 "is_configured": true, 00:16:28.503 "data_offset": 0, 00:16:28.503 "data_size": 65536 00:16:28.503 } 00:16:28.503 ] 00:16:28.503 }' 00:16:28.503 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.503 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.503 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.503 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.503 10:10:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.438 10:10:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.697 "name": "raid_bdev1", 00:16:29.697 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:29.697 "strip_size_kb": 64, 00:16:29.697 "state": "online", 00:16:29.697 "raid_level": "raid5f", 00:16:29.697 "superblock": false, 00:16:29.697 "num_base_bdevs": 3, 00:16:29.697 "num_base_bdevs_discovered": 3, 00:16:29.697 "num_base_bdevs_operational": 3, 00:16:29.697 "process": { 00:16:29.697 "type": "rebuild", 00:16:29.697 "target": "spare", 00:16:29.697 "progress": { 00:16:29.697 "blocks": 45056, 00:16:29.697 "percent": 34 00:16:29.697 } 00:16:29.697 }, 00:16:29.697 "base_bdevs_list": [ 00:16:29.697 { 00:16:29.697 "name": "spare", 00:16:29.697 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:29.697 "is_configured": true, 00:16:29.697 "data_offset": 0, 00:16:29.697 "data_size": 65536 00:16:29.697 }, 00:16:29.697 { 00:16:29.697 "name": "BaseBdev2", 00:16:29.697 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:29.697 "is_configured": true, 00:16:29.697 "data_offset": 0, 00:16:29.697 "data_size": 65536 00:16:29.697 }, 00:16:29.697 { 00:16:29.697 "name": "BaseBdev3", 00:16:29.697 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:29.697 "is_configured": true, 00:16:29.697 "data_offset": 0, 00:16:29.697 "data_size": 65536 00:16:29.697 } 00:16:29.697 ] 00:16:29.697 }' 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.697 10:10:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.630 "name": "raid_bdev1", 00:16:30.630 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:30.630 "strip_size_kb": 64, 00:16:30.630 "state": "online", 00:16:30.630 "raid_level": "raid5f", 00:16:30.630 "superblock": false, 00:16:30.630 "num_base_bdevs": 3, 00:16:30.630 "num_base_bdevs_discovered": 3, 00:16:30.630 "num_base_bdevs_operational": 3, 00:16:30.630 "process": { 00:16:30.630 "type": "rebuild", 00:16:30.630 "target": "spare", 00:16:30.630 "progress": { 00:16:30.630 "blocks": 69632, 00:16:30.630 "percent": 53 00:16:30.630 } 00:16:30.630 }, 00:16:30.630 "base_bdevs_list": [ 00:16:30.630 { 00:16:30.630 "name": "spare", 00:16:30.630 "uuid": 
"c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:30.630 "is_configured": true, 00:16:30.630 "data_offset": 0, 00:16:30.630 "data_size": 65536 00:16:30.630 }, 00:16:30.630 { 00:16:30.630 "name": "BaseBdev2", 00:16:30.630 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:30.630 "is_configured": true, 00:16:30.630 "data_offset": 0, 00:16:30.630 "data_size": 65536 00:16:30.630 }, 00:16:30.630 { 00:16:30.630 "name": "BaseBdev3", 00:16:30.630 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:30.630 "is_configured": true, 00:16:30.630 "data_offset": 0, 00:16:30.630 "data_size": 65536 00:16:30.630 } 00:16:30.630 ] 00:16:30.630 }' 00:16:30.630 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.888 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.888 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.888 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.888 10:10:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.822 10:10:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.822 10:10:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.822 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.822 "name": "raid_bdev1", 00:16:31.822 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:31.822 "strip_size_kb": 64, 00:16:31.822 "state": "online", 00:16:31.822 "raid_level": "raid5f", 00:16:31.822 "superblock": false, 00:16:31.822 "num_base_bdevs": 3, 00:16:31.822 "num_base_bdevs_discovered": 3, 00:16:31.822 "num_base_bdevs_operational": 3, 00:16:31.822 "process": { 00:16:31.822 "type": "rebuild", 00:16:31.822 "target": "spare", 00:16:31.822 "progress": { 00:16:31.822 "blocks": 92160, 00:16:31.822 "percent": 70 00:16:31.822 } 00:16:31.822 }, 00:16:31.822 "base_bdevs_list": [ 00:16:31.822 { 00:16:31.822 "name": "spare", 00:16:31.822 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 }, 00:16:31.822 { 00:16:31.822 "name": "BaseBdev2", 00:16:31.822 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 }, 00:16:31.822 { 00:16:31.822 "name": "BaseBdev3", 00:16:31.822 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:31.822 "is_configured": true, 00:16:31.822 "data_offset": 0, 00:16:31.822 "data_size": 65536 00:16:31.822 } 00:16:31.822 ] 00:16:31.822 }' 00:16:31.822 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.080 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:32.080 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.080 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.080 10:10:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.015 "name": "raid_bdev1", 00:16:33.015 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:33.015 "strip_size_kb": 64, 00:16:33.015 "state": "online", 00:16:33.015 "raid_level": "raid5f", 00:16:33.015 "superblock": false, 00:16:33.015 "num_base_bdevs": 3, 00:16:33.015 "num_base_bdevs_discovered": 3, 00:16:33.015 
"num_base_bdevs_operational": 3, 00:16:33.015 "process": { 00:16:33.015 "type": "rebuild", 00:16:33.015 "target": "spare", 00:16:33.015 "progress": { 00:16:33.015 "blocks": 116736, 00:16:33.015 "percent": 89 00:16:33.015 } 00:16:33.015 }, 00:16:33.015 "base_bdevs_list": [ 00:16:33.015 { 00:16:33.015 "name": "spare", 00:16:33.015 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:33.015 "is_configured": true, 00:16:33.015 "data_offset": 0, 00:16:33.015 "data_size": 65536 00:16:33.015 }, 00:16:33.015 { 00:16:33.015 "name": "BaseBdev2", 00:16:33.015 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:33.015 "is_configured": true, 00:16:33.015 "data_offset": 0, 00:16:33.015 "data_size": 65536 00:16:33.015 }, 00:16:33.015 { 00:16:33.015 "name": "BaseBdev3", 00:16:33.015 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:33.015 "is_configured": true, 00:16:33.015 "data_offset": 0, 00:16:33.015 "data_size": 65536 00:16:33.015 } 00:16:33.015 ] 00:16:33.015 }' 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.015 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.273 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.273 10:10:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.871 [2024-11-19 10:10:47.819309] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.871 [2024-11-19 10:10:47.819486] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.871 [2024-11-19 10:10:47.819568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.130 "name": "raid_bdev1", 00:16:34.130 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:34.130 "strip_size_kb": 64, 00:16:34.130 "state": "online", 00:16:34.130 "raid_level": "raid5f", 00:16:34.130 "superblock": false, 00:16:34.130 "num_base_bdevs": 3, 00:16:34.130 "num_base_bdevs_discovered": 3, 00:16:34.130 "num_base_bdevs_operational": 3, 00:16:34.130 "base_bdevs_list": [ 00:16:34.130 { 00:16:34.130 "name": "spare", 00:16:34.130 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:34.130 "is_configured": true, 00:16:34.130 "data_offset": 0, 00:16:34.130 "data_size": 65536 00:16:34.130 }, 00:16:34.130 { 00:16:34.130 "name": "BaseBdev2", 00:16:34.130 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:34.130 "is_configured": true, 00:16:34.130 
"data_offset": 0, 00:16:34.130 "data_size": 65536 00:16:34.130 }, 00:16:34.130 { 00:16:34.130 "name": "BaseBdev3", 00:16:34.130 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:34.130 "is_configured": true, 00:16:34.130 "data_offset": 0, 00:16:34.130 "data_size": 65536 00:16:34.130 } 00:16:34.130 ] 00:16:34.130 }' 00:16:34.130 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.388 10:10:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.388 "name": "raid_bdev1", 00:16:34.388 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:34.388 "strip_size_kb": 64, 00:16:34.388 "state": "online", 00:16:34.388 "raid_level": "raid5f", 00:16:34.388 "superblock": false, 00:16:34.388 "num_base_bdevs": 3, 00:16:34.388 "num_base_bdevs_discovered": 3, 00:16:34.388 "num_base_bdevs_operational": 3, 00:16:34.388 "base_bdevs_list": [ 00:16:34.388 { 00:16:34.388 "name": "spare", 00:16:34.388 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:34.388 "is_configured": true, 00:16:34.388 "data_offset": 0, 00:16:34.388 "data_size": 65536 00:16:34.388 }, 00:16:34.388 { 00:16:34.388 "name": "BaseBdev2", 00:16:34.388 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:34.388 "is_configured": true, 00:16:34.388 "data_offset": 0, 00:16:34.388 "data_size": 65536 00:16:34.388 }, 00:16:34.388 { 00:16:34.388 "name": "BaseBdev3", 00:16:34.388 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:34.388 "is_configured": true, 00:16:34.388 "data_offset": 0, 00:16:34.388 "data_size": 65536 00:16:34.388 } 00:16:34.388 ] 00:16:34.388 }' 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.388 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.646 10:10:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.646 "name": "raid_bdev1", 00:16:34.646 "uuid": "b6ba2f89-ad69-4aeb-81bc-8d9c49e7fb62", 00:16:34.646 "strip_size_kb": 64, 00:16:34.646 "state": "online", 00:16:34.646 "raid_level": "raid5f", 00:16:34.646 "superblock": false, 00:16:34.646 "num_base_bdevs": 3, 00:16:34.646 "num_base_bdevs_discovered": 3, 00:16:34.646 "num_base_bdevs_operational": 3, 00:16:34.646 "base_bdevs_list": [ 00:16:34.646 { 00:16:34.646 "name": "spare", 00:16:34.646 "uuid": "c6994a34-37ee-529a-a6c3-7dfdf169afa7", 00:16:34.646 "is_configured": true, 00:16:34.646 "data_offset": 0, 00:16:34.646 "data_size": 65536 00:16:34.646 }, 00:16:34.646 { 00:16:34.646 
"name": "BaseBdev2", 00:16:34.646 "uuid": "eb8a353d-e0de-59d3-97e3-44634e514408", 00:16:34.646 "is_configured": true, 00:16:34.646 "data_offset": 0, 00:16:34.646 "data_size": 65536 00:16:34.646 }, 00:16:34.646 { 00:16:34.646 "name": "BaseBdev3", 00:16:34.646 "uuid": "27c4538e-4490-5de9-ae50-681d9a020884", 00:16:34.646 "is_configured": true, 00:16:34.646 "data_offset": 0, 00:16:34.646 "data_size": 65536 00:16:34.646 } 00:16:34.646 ] 00:16:34.646 }' 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.646 10:10:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.903 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.903 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.903 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 [2024-11-19 10:10:49.138568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.164 [2024-11-19 10:10:49.138612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.164 [2024-11-19 10:10:49.138765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.164 [2024-11-19 10:10:49.138914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.164 [2024-11-19 10:10:49.138950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:35.164 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.165 10:10:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.165 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:35.423 /dev/nbd0 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:35.423 10:10:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.423 1+0 records in 00:16:35.423 1+0 records out 00:16:35.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00241389 s, 1.7 MB/s 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.423 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:35.989 /dev/nbd1 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.990 1+0 records in 00:16:35.990 1+0 records out 00:16:35.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486189 s, 8.4 MB/s 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.990 10:10:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.990 10:10:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.990 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.248 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81927 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81927 ']' 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81927 00:16:36.507 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81927 00:16:36.765 killing process with pid 81927 00:16:36.765 Received shutdown signal, test time was about 60.000000 seconds 00:16:36.765 00:16:36.765 Latency(us) 00:16:36.765 
[2024-11-19T10:10:50.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.765 [2024-11-19T10:10:50.997Z] =================================================================================================================== 00:16:36.765 [2024-11-19T10:10:50.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81927' 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81927 00:16:36.765 [2024-11-19 10:10:50.767684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.765 10:10:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81927 00:16:37.024 [2024-11-19 10:10:51.159642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:38.398 00:16:38.398 real 0m16.826s 00:16:38.398 user 0m21.520s 00:16:38.398 sys 0m2.170s 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.398 ************************************ 00:16:38.398 END TEST raid5f_rebuild_test 00:16:38.398 ************************************ 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.398 10:10:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:38.398 10:10:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:38.398 10:10:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.398 10:10:52 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.398 ************************************ 00:16:38.398 START TEST raid5f_rebuild_test_sb 00:16:38.398 ************************************ 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82383 00:16:38.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82383 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82383 ']' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.398 10:10:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:38.398 Zero copy mechanism will not be used. 00:16:38.398 [2024-11-19 10:10:52.446444] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:16:38.398 [2024-11-19 10:10:52.446630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82383 ] 00:16:38.398 [2024-11-19 10:10:52.626295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.657 [2024-11-19 10:10:52.781181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.915 [2024-11-19 10:10:53.010753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.915 [2024-11-19 10:10:53.010871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 BaseBdev1_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 [2024-11-19 10:10:53.520470] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.485 [2024-11-19 10:10:53.520582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.485 [2024-11-19 10:10:53.520621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:39.485 [2024-11-19 10:10:53.520642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.485 [2024-11-19 10:10:53.523980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.485 [2024-11-19 10:10:53.524041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.485 BaseBdev1 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 BaseBdev2_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 [2024-11-19 10:10:53.576954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:39.485 [2024-11-19 10:10:53.577063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:39.485 [2024-11-19 10:10:53.577100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:39.485 [2024-11-19 10:10:53.577124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.485 [2024-11-19 10:10:53.580302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.485 [2024-11-19 10:10:53.580367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:39.485 BaseBdev2 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 BaseBdev3_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 [2024-11-19 10:10:53.653851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:39.485 [2024-11-19 10:10:53.654218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.485 [2024-11-19 10:10:53.654286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:39.485 [2024-11-19 
10:10:53.654318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.485 [2024-11-19 10:10:53.658825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.485 [2024-11-19 10:10:53.658918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:39.485 BaseBdev3 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.743 spare_malloc 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.743 spare_delay 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.743 [2024-11-19 10:10:53.739369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.743 [2024-11-19 10:10:53.739488] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.743 [2024-11-19 10:10:53.739532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:39.743 [2024-11-19 10:10:53.739554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.743 [2024-11-19 10:10:53.742950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.743 [2024-11-19 10:10:53.743017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.743 spare 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.743 [2024-11-19 10:10:53.751549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.743 [2024-11-19 10:10:53.754565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.743 [2024-11-19 10:10:53.754868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.743 [2024-11-19 10:10:53.755298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:39.743 [2024-11-19 10:10:53.755429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:39.743 [2024-11-19 10:10:53.755884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:39.743 [2024-11-19 10:10:53.761194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:39.743 [2024-11-19 10:10:53.761387] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:39.743 [2024-11-19 10:10:53.761833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.743 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.743 "name": "raid_bdev1", 00:16:39.743 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:39.743 "strip_size_kb": 64, 00:16:39.743 "state": "online", 00:16:39.743 "raid_level": "raid5f", 00:16:39.743 "superblock": true, 00:16:39.743 "num_base_bdevs": 3, 00:16:39.743 "num_base_bdevs_discovered": 3, 00:16:39.743 "num_base_bdevs_operational": 3, 00:16:39.743 "base_bdevs_list": [ 00:16:39.743 { 00:16:39.743 "name": "BaseBdev1", 00:16:39.743 "uuid": "bc6fa3fd-6b94-57db-b9a0-eaf58a56b17d", 00:16:39.743 "is_configured": true, 00:16:39.743 "data_offset": 2048, 00:16:39.743 "data_size": 63488 00:16:39.743 }, 00:16:39.743 { 00:16:39.743 "name": "BaseBdev2", 00:16:39.743 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:39.743 "is_configured": true, 00:16:39.743 "data_offset": 2048, 00:16:39.743 "data_size": 63488 00:16:39.743 }, 00:16:39.743 { 00:16:39.744 "name": "BaseBdev3", 00:16:39.744 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:39.744 "is_configured": true, 00:16:39.744 "data_offset": 2048, 00:16:39.744 "data_size": 63488 00:16:39.744 } 00:16:39.744 ] 00:16:39.744 }' 00:16:39.744 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.744 10:10:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.374 [2024-11-19 10:10:54.340955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.374 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:40.633 [2024-11-19 10:10:54.748889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:40.633 /dev/nbd0 00:16:40.633 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.634 1+0 records in 00:16:40.634 1+0 records out 00:16:40.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586143 s, 7.0 MB/s 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:40.634 10:10:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:41.201 496+0 records in 00:16:41.201 496+0 records out 00:16:41.201 65011712 bytes (65 MB, 62 MiB) copied, 0.453472 s, 143 MB/s 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:41.201 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:41.460 [2024-11-19 10:10:55.567500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.460 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 [2024-11-19 10:10:55.585865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.461 "name": "raid_bdev1", 00:16:41.461 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:41.461 "strip_size_kb": 64, 00:16:41.461 "state": "online", 00:16:41.461 "raid_level": "raid5f", 00:16:41.461 "superblock": true, 00:16:41.461 "num_base_bdevs": 3, 00:16:41.461 "num_base_bdevs_discovered": 2, 00:16:41.461 "num_base_bdevs_operational": 2, 00:16:41.461 "base_bdevs_list": [ 00:16:41.461 { 00:16:41.461 "name": null, 00:16:41.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.461 "is_configured": 
false, 00:16:41.461 "data_offset": 0, 00:16:41.461 "data_size": 63488 00:16:41.461 }, 00:16:41.461 { 00:16:41.461 "name": "BaseBdev2", 00:16:41.461 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:41.461 "is_configured": true, 00:16:41.461 "data_offset": 2048, 00:16:41.461 "data_size": 63488 00:16:41.461 }, 00:16:41.461 { 00:16:41.461 "name": "BaseBdev3", 00:16:41.461 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:41.461 "is_configured": true, 00:16:41.461 "data_offset": 2048, 00:16:41.461 "data_size": 63488 00:16:41.461 } 00:16:41.461 ] 00:16:41.461 }' 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.461 10:10:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.027 10:10:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.027 10:10:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.027 10:10:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.027 [2024-11-19 10:10:56.097980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.027 [2024-11-19 10:10:56.114377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:42.027 10:10:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.027 10:10:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:42.027 [2024-11-19 10:10:56.122499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.993 10:10:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.993 "name": "raid_bdev1", 00:16:42.993 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:42.993 "strip_size_kb": 64, 00:16:42.993 "state": "online", 00:16:42.993 "raid_level": "raid5f", 00:16:42.993 "superblock": true, 00:16:42.993 "num_base_bdevs": 3, 00:16:42.993 "num_base_bdevs_discovered": 3, 00:16:42.993 "num_base_bdevs_operational": 3, 00:16:42.993 "process": { 00:16:42.993 "type": "rebuild", 00:16:42.993 "target": "spare", 00:16:42.993 "progress": { 00:16:42.993 "blocks": 18432, 00:16:42.993 "percent": 14 00:16:42.993 } 00:16:42.993 }, 00:16:42.993 "base_bdevs_list": [ 00:16:42.993 { 00:16:42.993 "name": "spare", 00:16:42.993 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:42.993 "is_configured": true, 00:16:42.993 "data_offset": 2048, 00:16:42.993 "data_size": 63488 00:16:42.993 }, 00:16:42.993 { 00:16:42.993 "name": "BaseBdev2", 00:16:42.993 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:42.993 "is_configured": true, 00:16:42.993 "data_offset": 2048, 00:16:42.993 "data_size": 63488 
00:16:42.993 }, 00:16:42.993 { 00:16:42.993 "name": "BaseBdev3", 00:16:42.993 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:42.993 "is_configured": true, 00:16:42.993 "data_offset": 2048, 00:16:42.993 "data_size": 63488 00:16:42.993 } 00:16:42.993 ] 00:16:42.993 }' 00:16:42.993 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.252 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.253 [2024-11-19 10:10:57.288563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.253 [2024-11-19 10:10:57.341002] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.253 [2024-11-19 10:10:57.341111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.253 [2024-11-19 10:10:57.341144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.253 [2024-11-19 10:10:57.341157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.253 "name": "raid_bdev1", 00:16:43.253 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:43.253 "strip_size_kb": 64, 00:16:43.253 "state": "online", 00:16:43.253 "raid_level": "raid5f", 00:16:43.253 "superblock": true, 00:16:43.253 "num_base_bdevs": 3, 00:16:43.253 "num_base_bdevs_discovered": 2, 00:16:43.253 "num_base_bdevs_operational": 2, 00:16:43.253 "base_bdevs_list": [ 00:16:43.253 
{ 00:16:43.253 "name": null, 00:16:43.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.253 "is_configured": false, 00:16:43.253 "data_offset": 0, 00:16:43.253 "data_size": 63488 00:16:43.253 }, 00:16:43.253 { 00:16:43.253 "name": "BaseBdev2", 00:16:43.253 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:43.253 "is_configured": true, 00:16:43.253 "data_offset": 2048, 00:16:43.253 "data_size": 63488 00:16:43.253 }, 00:16:43.253 { 00:16:43.253 "name": "BaseBdev3", 00:16:43.253 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:43.253 "is_configured": true, 00:16:43.253 "data_offset": 2048, 00:16:43.253 "data_size": 63488 00:16:43.253 } 00:16:43.253 ] 00:16:43.253 }' 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.253 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.860 "name": "raid_bdev1", 00:16:43.860 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:43.860 "strip_size_kb": 64, 00:16:43.860 "state": "online", 00:16:43.860 "raid_level": "raid5f", 00:16:43.860 "superblock": true, 00:16:43.860 "num_base_bdevs": 3, 00:16:43.860 "num_base_bdevs_discovered": 2, 00:16:43.860 "num_base_bdevs_operational": 2, 00:16:43.860 "base_bdevs_list": [ 00:16:43.860 { 00:16:43.860 "name": null, 00:16:43.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.860 "is_configured": false, 00:16:43.860 "data_offset": 0, 00:16:43.860 "data_size": 63488 00:16:43.860 }, 00:16:43.860 { 00:16:43.860 "name": "BaseBdev2", 00:16:43.860 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:43.860 "is_configured": true, 00:16:43.860 "data_offset": 2048, 00:16:43.860 "data_size": 63488 00:16:43.860 }, 00:16:43.860 { 00:16:43.860 "name": "BaseBdev3", 00:16:43.860 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:43.860 "is_configured": true, 00:16:43.860 "data_offset": 2048, 00:16:43.860 "data_size": 63488 00:16:43.860 } 00:16:43.860 ] 00:16:43.860 }' 00:16:43.860 10:10:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.860 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:44.136 [2024-11-19 10:10:58.071009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.136 [2024-11-19 10:10:58.086634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:44.137 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.137 10:10:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:44.137 [2024-11-19 10:10:58.094386] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.073 "name": "raid_bdev1", 00:16:45.073 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:45.073 "strip_size_kb": 64, 00:16:45.073 "state": "online", 
00:16:45.073 "raid_level": "raid5f", 00:16:45.073 "superblock": true, 00:16:45.073 "num_base_bdevs": 3, 00:16:45.073 "num_base_bdevs_discovered": 3, 00:16:45.073 "num_base_bdevs_operational": 3, 00:16:45.073 "process": { 00:16:45.073 "type": "rebuild", 00:16:45.073 "target": "spare", 00:16:45.073 "progress": { 00:16:45.073 "blocks": 18432, 00:16:45.073 "percent": 14 00:16:45.073 } 00:16:45.073 }, 00:16:45.073 "base_bdevs_list": [ 00:16:45.073 { 00:16:45.073 "name": "spare", 00:16:45.073 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:45.073 "is_configured": true, 00:16:45.073 "data_offset": 2048, 00:16:45.073 "data_size": 63488 00:16:45.073 }, 00:16:45.073 { 00:16:45.073 "name": "BaseBdev2", 00:16:45.073 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:45.073 "is_configured": true, 00:16:45.073 "data_offset": 2048, 00:16:45.073 "data_size": 63488 00:16:45.073 }, 00:16:45.073 { 00:16:45.073 "name": "BaseBdev3", 00:16:45.073 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:45.073 "is_configured": true, 00:16:45.073 "data_offset": 2048, 00:16:45.073 "data_size": 63488 00:16:45.073 } 00:16:45.073 ] 00:16:45.073 }' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:45.073 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=628 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.073 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.333 "name": "raid_bdev1", 00:16:45.333 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:45.333 "strip_size_kb": 64, 00:16:45.333 "state": "online", 00:16:45.333 "raid_level": "raid5f", 00:16:45.333 "superblock": true, 00:16:45.333 "num_base_bdevs": 3, 00:16:45.333 "num_base_bdevs_discovered": 3, 00:16:45.333 "num_base_bdevs_operational": 3, 00:16:45.333 "process": { 00:16:45.333 "type": 
"rebuild", 00:16:45.333 "target": "spare", 00:16:45.333 "progress": { 00:16:45.333 "blocks": 22528, 00:16:45.333 "percent": 17 00:16:45.333 } 00:16:45.333 }, 00:16:45.333 "base_bdevs_list": [ 00:16:45.333 { 00:16:45.333 "name": "spare", 00:16:45.333 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:45.333 "is_configured": true, 00:16:45.333 "data_offset": 2048, 00:16:45.333 "data_size": 63488 00:16:45.333 }, 00:16:45.333 { 00:16:45.333 "name": "BaseBdev2", 00:16:45.333 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:45.333 "is_configured": true, 00:16:45.333 "data_offset": 2048, 00:16:45.333 "data_size": 63488 00:16:45.333 }, 00:16:45.333 { 00:16:45.333 "name": "BaseBdev3", 00:16:45.333 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:45.333 "is_configured": true, 00:16:45.333 "data_offset": 2048, 00:16:45.333 "data_size": 63488 00:16:45.333 } 00:16:45.333 ] 00:16:45.333 }' 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.333 10:10:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.267 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.267 "name": "raid_bdev1", 00:16:46.267 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:46.267 "strip_size_kb": 64, 00:16:46.267 "state": "online", 00:16:46.267 "raid_level": "raid5f", 00:16:46.267 "superblock": true, 00:16:46.267 "num_base_bdevs": 3, 00:16:46.267 "num_base_bdevs_discovered": 3, 00:16:46.267 "num_base_bdevs_operational": 3, 00:16:46.267 "process": { 00:16:46.267 "type": "rebuild", 00:16:46.267 "target": "spare", 00:16:46.267 "progress": { 00:16:46.267 "blocks": 47104, 00:16:46.267 "percent": 37 00:16:46.267 } 00:16:46.267 }, 00:16:46.267 "base_bdevs_list": [ 00:16:46.267 { 00:16:46.268 "name": "spare", 00:16:46.268 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:46.268 "is_configured": true, 00:16:46.268 "data_offset": 2048, 00:16:46.268 "data_size": 63488 00:16:46.268 }, 00:16:46.268 { 00:16:46.268 "name": "BaseBdev2", 00:16:46.268 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:46.268 "is_configured": true, 00:16:46.268 "data_offset": 2048, 00:16:46.268 "data_size": 63488 00:16:46.268 }, 00:16:46.268 { 00:16:46.268 "name": "BaseBdev3", 00:16:46.268 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:46.268 
"is_configured": true, 00:16:46.268 "data_offset": 2048, 00:16:46.268 "data_size": 63488 00:16:46.268 } 00:16:46.268 ] 00:16:46.268 }' 00:16:46.268 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.526 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.526 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.526 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.526 10:11:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.482 "name": "raid_bdev1", 00:16:47.482 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:47.482 "strip_size_kb": 64, 00:16:47.482 "state": "online", 00:16:47.482 "raid_level": "raid5f", 00:16:47.482 "superblock": true, 00:16:47.482 "num_base_bdevs": 3, 00:16:47.482 "num_base_bdevs_discovered": 3, 00:16:47.482 "num_base_bdevs_operational": 3, 00:16:47.482 "process": { 00:16:47.482 "type": "rebuild", 00:16:47.482 "target": "spare", 00:16:47.482 "progress": { 00:16:47.482 "blocks": 69632, 00:16:47.482 "percent": 54 00:16:47.482 } 00:16:47.482 }, 00:16:47.482 "base_bdevs_list": [ 00:16:47.482 { 00:16:47.482 "name": "spare", 00:16:47.482 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:47.482 "is_configured": true, 00:16:47.482 "data_offset": 2048, 00:16:47.482 "data_size": 63488 00:16:47.482 }, 00:16:47.482 { 00:16:47.482 "name": "BaseBdev2", 00:16:47.482 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:47.482 "is_configured": true, 00:16:47.482 "data_offset": 2048, 00:16:47.482 "data_size": 63488 00:16:47.482 }, 00:16:47.482 { 00:16:47.482 "name": "BaseBdev3", 00:16:47.482 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:47.482 "is_configured": true, 00:16:47.482 "data_offset": 2048, 00:16:47.482 "data_size": 63488 00:16:47.482 } 00:16:47.482 ] 00:16:47.482 }' 00:16:47.482 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.760 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.760 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.760 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.760 10:11:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.695 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.696 "name": "raid_bdev1", 00:16:48.696 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:48.696 "strip_size_kb": 64, 00:16:48.696 "state": "online", 00:16:48.696 "raid_level": "raid5f", 00:16:48.696 "superblock": true, 00:16:48.696 "num_base_bdevs": 3, 00:16:48.696 "num_base_bdevs_discovered": 3, 00:16:48.696 "num_base_bdevs_operational": 3, 00:16:48.696 "process": { 00:16:48.696 "type": "rebuild", 00:16:48.696 "target": "spare", 00:16:48.696 "progress": { 00:16:48.696 "blocks": 92160, 00:16:48.696 "percent": 72 00:16:48.696 } 00:16:48.696 }, 00:16:48.696 "base_bdevs_list": [ 00:16:48.696 { 00:16:48.696 "name": "spare", 00:16:48.696 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:48.696 "is_configured": true, 
00:16:48.696 "data_offset": 2048, 00:16:48.696 "data_size": 63488 00:16:48.696 }, 00:16:48.696 { 00:16:48.696 "name": "BaseBdev2", 00:16:48.696 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:48.696 "is_configured": true, 00:16:48.696 "data_offset": 2048, 00:16:48.696 "data_size": 63488 00:16:48.696 }, 00:16:48.696 { 00:16:48.696 "name": "BaseBdev3", 00:16:48.696 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:48.696 "is_configured": true, 00:16:48.696 "data_offset": 2048, 00:16:48.696 "data_size": 63488 00:16:48.696 } 00:16:48.696 ] 00:16:48.696 }' 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.696 10:11:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.072 "name": "raid_bdev1", 00:16:50.072 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:50.072 "strip_size_kb": 64, 00:16:50.072 "state": "online", 00:16:50.072 "raid_level": "raid5f", 00:16:50.072 "superblock": true, 00:16:50.072 "num_base_bdevs": 3, 00:16:50.072 "num_base_bdevs_discovered": 3, 00:16:50.072 "num_base_bdevs_operational": 3, 00:16:50.072 "process": { 00:16:50.072 "type": "rebuild", 00:16:50.072 "target": "spare", 00:16:50.072 "progress": { 00:16:50.072 "blocks": 116736, 00:16:50.072 "percent": 91 00:16:50.072 } 00:16:50.072 }, 00:16:50.072 "base_bdevs_list": [ 00:16:50.072 { 00:16:50.072 "name": "spare", 00:16:50.072 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 "data_size": 63488 00:16:50.072 }, 00:16:50.072 { 00:16:50.072 "name": "BaseBdev2", 00:16:50.072 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 "data_size": 63488 00:16:50.072 }, 00:16:50.072 { 00:16:50.072 "name": "BaseBdev3", 00:16:50.072 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 "data_size": 63488 00:16:50.072 } 00:16:50.072 ] 00:16:50.072 }' 00:16:50.072 10:11:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.072 10:11:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:50.072 10:11:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.072 10:11:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.072 10:11:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.331 [2024-11-19 10:11:04.387643] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:50.331 [2024-11-19 10:11:04.387839] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:50.331 [2024-11-19 10:11:04.388045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.898 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.158 10:11:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.158 "name": "raid_bdev1", 00:16:51.158 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:51.158 "strip_size_kb": 64, 00:16:51.158 "state": "online", 00:16:51.158 "raid_level": "raid5f", 00:16:51.158 "superblock": true, 00:16:51.158 "num_base_bdevs": 3, 00:16:51.158 "num_base_bdevs_discovered": 3, 00:16:51.158 "num_base_bdevs_operational": 3, 00:16:51.158 "base_bdevs_list": [ 00:16:51.158 { 00:16:51.158 "name": "spare", 00:16:51.158 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 00:16:51.158 "data_size": 63488 00:16:51.158 }, 00:16:51.158 { 00:16:51.158 "name": "BaseBdev2", 00:16:51.158 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 00:16:51.158 "data_size": 63488 00:16:51.158 }, 00:16:51.158 { 00:16:51.158 "name": "BaseBdev3", 00:16:51.158 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 00:16:51.158 "data_size": 63488 00:16:51.158 } 00:16:51.158 ] 00:16:51.158 }' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.158 
10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.158 "name": "raid_bdev1", 00:16:51.158 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:51.158 "strip_size_kb": 64, 00:16:51.158 "state": "online", 00:16:51.158 "raid_level": "raid5f", 00:16:51.158 "superblock": true, 00:16:51.158 "num_base_bdevs": 3, 00:16:51.158 "num_base_bdevs_discovered": 3, 00:16:51.158 "num_base_bdevs_operational": 3, 00:16:51.158 "base_bdevs_list": [ 00:16:51.158 { 00:16:51.158 "name": "spare", 00:16:51.158 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 00:16:51.158 "data_size": 63488 00:16:51.158 }, 00:16:51.158 { 00:16:51.158 "name": "BaseBdev2", 00:16:51.158 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 00:16:51.158 "data_size": 63488 00:16:51.158 }, 00:16:51.158 { 00:16:51.158 "name": "BaseBdev3", 00:16:51.158 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:51.158 "is_configured": true, 00:16:51.158 "data_offset": 2048, 
00:16:51.158 "data_size": 63488 00:16:51.158 } 00:16:51.158 ] 00:16:51.158 }' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.158 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.418 
10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.418 "name": "raid_bdev1", 00:16:51.418 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:51.418 "strip_size_kb": 64, 00:16:51.418 "state": "online", 00:16:51.418 "raid_level": "raid5f", 00:16:51.418 "superblock": true, 00:16:51.418 "num_base_bdevs": 3, 00:16:51.418 "num_base_bdevs_discovered": 3, 00:16:51.418 "num_base_bdevs_operational": 3, 00:16:51.418 "base_bdevs_list": [ 00:16:51.418 { 00:16:51.418 "name": "spare", 00:16:51.418 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:51.418 "is_configured": true, 00:16:51.418 "data_offset": 2048, 00:16:51.418 "data_size": 63488 00:16:51.418 }, 00:16:51.418 { 00:16:51.418 "name": "BaseBdev2", 00:16:51.418 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:51.418 "is_configured": true, 00:16:51.418 "data_offset": 2048, 00:16:51.418 "data_size": 63488 00:16:51.418 }, 00:16:51.418 { 00:16:51.418 "name": "BaseBdev3", 00:16:51.418 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:51.418 "is_configured": true, 00:16:51.418 "data_offset": 2048, 00:16:51.418 "data_size": 63488 00:16:51.418 } 00:16:51.418 ] 00:16:51.418 }' 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.418 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.986 [2024-11-19 10:11:05.954310] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.986 [2024-11-19 10:11:05.954361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.986 [2024-11-19 10:11:05.954493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.986 [2024-11-19 10:11:05.954616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.986 [2024-11-19 10:11:05.954645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:51.986 10:11:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:51.986 10:11:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.986 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:52.245 /dev/nbd0 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.245 1+0 records in 00:16:52.245 1+0 records out 00:16:52.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395416 s, 10.4 MB/s 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.245 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:52.504 /dev/nbd1 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:52.504 
10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.504 1+0 records in 00:16:52.504 1+0 records out 00:16:52.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448431 s, 9.1 MB/s 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.504 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.763 10:11:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:53.021 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.021 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.022 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.589 [2024-11-19 10:11:07.575526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.589 [2024-11-19 10:11:07.575641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.589 [2024-11-19 10:11:07.575686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:53.589 [2024-11-19 10:11:07.575705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.589 [2024-11-19 10:11:07.579229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.589 [2024-11-19 10:11:07.579303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.589 [2024-11-19 10:11:07.579456] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:53.589 [2024-11-19 10:11:07.579544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.589 [2024-11-19 10:11:07.579854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.589 [2024-11-19 10:11:07.580025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.589 spare 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.589 [2024-11-19 10:11:07.680187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:53.589 [2024-11-19 10:11:07.680281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:53.589 [2024-11-19 10:11:07.680798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:53.589 [2024-11-19 10:11:07.685950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:53.589 [2024-11-19 10:11:07.685990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:53.589 [2024-11-19 10:11:07.686324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.589 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.590 "name": "raid_bdev1", 00:16:53.590 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:53.590 "strip_size_kb": 64, 00:16:53.590 "state": "online", 00:16:53.590 "raid_level": "raid5f", 00:16:53.590 "superblock": true, 00:16:53.590 "num_base_bdevs": 3, 00:16:53.590 "num_base_bdevs_discovered": 3, 00:16:53.590 "num_base_bdevs_operational": 3, 00:16:53.590 "base_bdevs_list": [ 00:16:53.590 { 
00:16:53.590 "name": "spare", 00:16:53.590 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:53.590 "is_configured": true, 00:16:53.590 "data_offset": 2048, 00:16:53.590 "data_size": 63488 00:16:53.590 }, 00:16:53.590 { 00:16:53.590 "name": "BaseBdev2", 00:16:53.590 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:53.590 "is_configured": true, 00:16:53.590 "data_offset": 2048, 00:16:53.590 "data_size": 63488 00:16:53.590 }, 00:16:53.590 { 00:16:53.590 "name": "BaseBdev3", 00:16:53.590 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:53.590 "is_configured": true, 00:16:53.590 "data_offset": 2048, 00:16:53.590 "data_size": 63488 00:16:53.590 } 00:16:53.590 ] 00:16:53.590 }' 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.590 10:11:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.155 "name": "raid_bdev1", 00:16:54.155 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:54.155 "strip_size_kb": 64, 00:16:54.155 "state": "online", 00:16:54.155 "raid_level": "raid5f", 00:16:54.155 "superblock": true, 00:16:54.155 "num_base_bdevs": 3, 00:16:54.155 "num_base_bdevs_discovered": 3, 00:16:54.155 "num_base_bdevs_operational": 3, 00:16:54.155 "base_bdevs_list": [ 00:16:54.155 { 00:16:54.155 "name": "spare", 00:16:54.155 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:54.155 "is_configured": true, 00:16:54.155 "data_offset": 2048, 00:16:54.155 "data_size": 63488 00:16:54.155 }, 00:16:54.155 { 00:16:54.155 "name": "BaseBdev2", 00:16:54.155 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:54.155 "is_configured": true, 00:16:54.155 "data_offset": 2048, 00:16:54.155 "data_size": 63488 00:16:54.155 }, 00:16:54.155 { 00:16:54.155 "name": "BaseBdev3", 00:16:54.155 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:54.155 "is_configured": true, 00:16:54.155 "data_offset": 2048, 00:16:54.155 "data_size": 63488 00:16:54.155 } 00:16:54.155 ] 00:16:54.155 }' 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:54.155 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.414 [2024-11-19 10:11:08.421095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.414 10:11:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.414 "name": "raid_bdev1", 00:16:54.414 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:54.414 "strip_size_kb": 64, 00:16:54.414 "state": "online", 00:16:54.414 "raid_level": "raid5f", 00:16:54.414 "superblock": true, 00:16:54.414 "num_base_bdevs": 3, 00:16:54.414 "num_base_bdevs_discovered": 2, 00:16:54.414 "num_base_bdevs_operational": 2, 00:16:54.414 "base_bdevs_list": [ 00:16:54.414 { 00:16:54.414 "name": null, 00:16:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.414 "is_configured": false, 00:16:54.414 "data_offset": 0, 00:16:54.414 "data_size": 63488 00:16:54.414 }, 00:16:54.414 { 00:16:54.414 "name": "BaseBdev2", 00:16:54.414 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:54.414 "is_configured": true, 00:16:54.414 "data_offset": 2048, 00:16:54.414 "data_size": 63488 00:16:54.414 }, 00:16:54.414 { 00:16:54.414 "name": "BaseBdev3", 00:16:54.414 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:54.414 "is_configured": true, 00:16:54.414 "data_offset": 2048, 00:16:54.414 "data_size": 63488 00:16:54.414 } 00:16:54.414 ] 00:16:54.414 }' 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.414 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.981 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.981 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.981 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.981 [2024-11-19 10:11:08.917218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.981 [2024-11-19 10:11:08.917528] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.981 [2024-11-19 10:11:08.917566] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:54.981 [2024-11-19 10:11:08.917620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.981 [2024-11-19 10:11:08.933315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:54.981 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.981 10:11:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:54.981 [2024-11-19 10:11:08.941038] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.913 
10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.913 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.914 10:11:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.914 "name": "raid_bdev1", 00:16:55.914 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:55.914 "strip_size_kb": 64, 00:16:55.914 "state": "online", 00:16:55.914 "raid_level": "raid5f", 00:16:55.914 "superblock": true, 00:16:55.914 "num_base_bdevs": 3, 00:16:55.914 "num_base_bdevs_discovered": 3, 00:16:55.914 "num_base_bdevs_operational": 3, 00:16:55.914 "process": { 00:16:55.914 "type": "rebuild", 00:16:55.914 "target": "spare", 00:16:55.914 "progress": { 00:16:55.914 "blocks": 18432, 00:16:55.914 "percent": 14 00:16:55.914 } 00:16:55.914 }, 00:16:55.914 "base_bdevs_list": [ 00:16:55.914 { 00:16:55.914 "name": "spare", 00:16:55.914 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:55.914 "is_configured": true, 00:16:55.914 "data_offset": 2048, 00:16:55.914 "data_size": 63488 00:16:55.914 }, 00:16:55.914 { 00:16:55.914 "name": "BaseBdev2", 00:16:55.914 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:55.914 "is_configured": true, 00:16:55.914 "data_offset": 2048, 00:16:55.914 "data_size": 63488 00:16:55.914 }, 00:16:55.914 { 00:16:55.914 "name": "BaseBdev3", 00:16:55.914 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:55.914 "is_configured": true, 00:16:55.914 "data_offset": 2048, 00:16:55.914 "data_size": 63488 00:16:55.914 } 00:16:55.914 ] 00:16:55.914 }' 00:16:55.914 10:11:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.914 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.914 [2024-11-19 10:11:10.099557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.173 [2024-11-19 10:11:10.159868] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:56.173 [2024-11-19 10:11:10.160053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.173 [2024-11-19 10:11:10.160102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.173 [2024-11-19 10:11:10.160121] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.173 
10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.173 "name": "raid_bdev1", 00:16:56.173 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:56.173 "strip_size_kb": 64, 00:16:56.173 "state": "online", 00:16:56.173 "raid_level": "raid5f", 00:16:56.173 "superblock": true, 00:16:56.173 "num_base_bdevs": 3, 00:16:56.173 "num_base_bdevs_discovered": 2, 00:16:56.173 "num_base_bdevs_operational": 2, 00:16:56.173 "base_bdevs_list": [ 00:16:56.173 { 00:16:56.173 "name": null, 00:16:56.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.173 "is_configured": false, 00:16:56.173 "data_offset": 0, 00:16:56.173 "data_size": 63488 00:16:56.173 }, 00:16:56.173 { 00:16:56.173 "name": "BaseBdev2", 00:16:56.173 "uuid": 
"216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:56.173 "is_configured": true, 00:16:56.173 "data_offset": 2048, 00:16:56.173 "data_size": 63488 00:16:56.173 }, 00:16:56.173 { 00:16:56.173 "name": "BaseBdev3", 00:16:56.173 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:56.173 "is_configured": true, 00:16:56.173 "data_offset": 2048, 00:16:56.173 "data_size": 63488 00:16:56.173 } 00:16:56.173 ] 00:16:56.173 }' 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.173 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.741 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:56.741 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.741 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.741 [2024-11-19 10:11:10.821493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:56.741 [2024-11-19 10:11:10.821613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.741 [2024-11-19 10:11:10.821651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:56.741 [2024-11-19 10:11:10.821674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.741 [2024-11-19 10:11:10.822404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.741 [2024-11-19 10:11:10.822462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:56.741 [2024-11-19 10:11:10.822615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:56.741 [2024-11-19 10:11:10.822641] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:56.741 [2024-11-19 10:11:10.822667] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:56.741 [2024-11-19 10:11:10.822714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.741 [2024-11-19 10:11:10.838666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:56.741 spare 00:16:56.741 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.741 10:11:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:56.741 [2024-11-19 10:11:10.846564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.676 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.677 "name": 
"raid_bdev1", 00:16:57.677 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:57.677 "strip_size_kb": 64, 00:16:57.677 "state": "online", 00:16:57.677 "raid_level": "raid5f", 00:16:57.677 "superblock": true, 00:16:57.677 "num_base_bdevs": 3, 00:16:57.677 "num_base_bdevs_discovered": 3, 00:16:57.677 "num_base_bdevs_operational": 3, 00:16:57.677 "process": { 00:16:57.677 "type": "rebuild", 00:16:57.677 "target": "spare", 00:16:57.677 "progress": { 00:16:57.677 "blocks": 18432, 00:16:57.677 "percent": 14 00:16:57.677 } 00:16:57.677 }, 00:16:57.677 "base_bdevs_list": [ 00:16:57.677 { 00:16:57.677 "name": "spare", 00:16:57.677 "uuid": "7c8e829b-8e47-5586-a7ec-244d54826ae9", 00:16:57.677 "is_configured": true, 00:16:57.677 "data_offset": 2048, 00:16:57.677 "data_size": 63488 00:16:57.677 }, 00:16:57.677 { 00:16:57.677 "name": "BaseBdev2", 00:16:57.677 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:57.677 "is_configured": true, 00:16:57.677 "data_offset": 2048, 00:16:57.677 "data_size": 63488 00:16:57.677 }, 00:16:57.677 { 00:16:57.677 "name": "BaseBdev3", 00:16:57.677 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:57.677 "is_configured": true, 00:16:57.677 "data_offset": 2048, 00:16:57.677 "data_size": 63488 00:16:57.677 } 00:16:57.677 ] 00:16:57.677 }' 00:16:57.677 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.936 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.936 10:11:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.936 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.936 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.937 10:11:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.937 [2024-11-19 10:11:12.016929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.937 [2024-11-19 10:11:12.065310] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.937 [2024-11-19 10:11:12.065441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.937 [2024-11-19 10:11:12.065474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.937 [2024-11-19 10:11:12.065486] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.937 "name": "raid_bdev1", 00:16:57.937 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:57.937 "strip_size_kb": 64, 00:16:57.937 "state": "online", 00:16:57.937 "raid_level": "raid5f", 00:16:57.937 "superblock": true, 00:16:57.937 "num_base_bdevs": 3, 00:16:57.937 "num_base_bdevs_discovered": 2, 00:16:57.937 "num_base_bdevs_operational": 2, 00:16:57.937 "base_bdevs_list": [ 00:16:57.937 { 00:16:57.937 "name": null, 00:16:57.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.937 "is_configured": false, 00:16:57.937 "data_offset": 0, 00:16:57.937 "data_size": 63488 00:16:57.937 }, 00:16:57.937 { 00:16:57.937 "name": "BaseBdev2", 00:16:57.937 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:57.937 "is_configured": true, 00:16:57.937 "data_offset": 2048, 00:16:57.937 "data_size": 63488 00:16:57.937 }, 00:16:57.937 { 00:16:57.937 "name": "BaseBdev3", 00:16:57.937 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:57.937 "is_configured": true, 00:16:57.937 "data_offset": 2048, 00:16:57.937 "data_size": 63488 00:16:57.937 } 00:16:57.937 ] 00:16:57.937 }' 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.937 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.503 "name": "raid_bdev1", 00:16:58.503 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:58.503 "strip_size_kb": 64, 00:16:58.503 "state": "online", 00:16:58.503 "raid_level": "raid5f", 00:16:58.503 "superblock": true, 00:16:58.503 "num_base_bdevs": 3, 00:16:58.503 "num_base_bdevs_discovered": 2, 00:16:58.503 "num_base_bdevs_operational": 2, 00:16:58.503 "base_bdevs_list": [ 00:16:58.503 { 00:16:58.503 "name": null, 00:16:58.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.503 "is_configured": false, 00:16:58.503 "data_offset": 0, 00:16:58.503 "data_size": 63488 00:16:58.503 }, 00:16:58.503 { 00:16:58.503 "name": "BaseBdev2", 00:16:58.503 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:58.503 "is_configured": true, 00:16:58.503 "data_offset": 2048, 00:16:58.503 "data_size": 63488 00:16:58.503 }, 00:16:58.503 { 
00:16:58.503 "name": "BaseBdev3", 00:16:58.503 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:58.503 "is_configured": true, 00:16:58.503 "data_offset": 2048, 00:16:58.503 "data_size": 63488 00:16:58.503 } 00:16:58.503 ] 00:16:58.503 }' 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.503 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.762 [2024-11-19 10:11:12.751874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.762 [2024-11-19 10:11:12.751965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.762 [2024-11-19 10:11:12.752009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:58.762 [2024-11-19 10:11:12.752025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.762 
[2024-11-19 10:11:12.752711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.762 [2024-11-19 10:11:12.752752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.762 [2024-11-19 10:11:12.752900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:58.762 [2024-11-19 10:11:12.752929] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:58.762 [2024-11-19 10:11:12.752956] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:58.762 [2024-11-19 10:11:12.752970] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:58.762 BaseBdev1 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.762 10:11:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.708 10:11:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.708 "name": "raid_bdev1", 00:16:59.708 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:16:59.708 "strip_size_kb": 64, 00:16:59.708 "state": "online", 00:16:59.708 "raid_level": "raid5f", 00:16:59.708 "superblock": true, 00:16:59.708 "num_base_bdevs": 3, 00:16:59.708 "num_base_bdevs_discovered": 2, 00:16:59.708 "num_base_bdevs_operational": 2, 00:16:59.708 "base_bdevs_list": [ 00:16:59.708 { 00:16:59.708 "name": null, 00:16:59.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.708 "is_configured": false, 00:16:59.708 "data_offset": 0, 00:16:59.708 "data_size": 63488 00:16:59.708 }, 00:16:59.708 { 00:16:59.708 "name": "BaseBdev2", 00:16:59.708 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:16:59.708 "is_configured": true, 00:16:59.708 "data_offset": 2048, 00:16:59.708 "data_size": 63488 00:16:59.708 }, 00:16:59.708 { 00:16:59.708 "name": "BaseBdev3", 00:16:59.708 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:16:59.708 "is_configured": true, 00:16:59.708 "data_offset": 2048, 00:16:59.708 "data_size": 63488 00:16:59.708 } 00:16:59.708 ] 00:16:59.708 }' 00:16:59.708 10:11:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.708 10:11:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.285 "name": "raid_bdev1", 00:17:00.285 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:17:00.285 "strip_size_kb": 64, 00:17:00.285 "state": "online", 00:17:00.285 "raid_level": "raid5f", 00:17:00.285 "superblock": true, 00:17:00.285 "num_base_bdevs": 3, 00:17:00.285 "num_base_bdevs_discovered": 2, 00:17:00.285 "num_base_bdevs_operational": 2, 00:17:00.285 "base_bdevs_list": [ 00:17:00.285 { 00:17:00.285 "name": null, 00:17:00.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.285 "is_configured": false, 00:17:00.285 "data_offset": 0, 00:17:00.285 "data_size": 63488 
00:17:00.285 }, 00:17:00.285 { 00:17:00.285 "name": "BaseBdev2", 00:17:00.285 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:17:00.285 "is_configured": true, 00:17:00.285 "data_offset": 2048, 00:17:00.285 "data_size": 63488 00:17:00.285 }, 00:17:00.285 { 00:17:00.285 "name": "BaseBdev3", 00:17:00.285 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:17:00.285 "is_configured": true, 00:17:00.285 "data_offset": 2048, 00:17:00.285 "data_size": 63488 00:17:00.285 } 00:17:00.285 ] 00:17:00.285 }' 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:00.285 10:11:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.285 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.544 [2024-11-19 10:11:14.520605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.544 [2024-11-19 10:11:14.521023] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:00.544 [2024-11-19 10:11:14.521203] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:00.544 request: 00:17:00.544 { 00:17:00.544 "base_bdev": "BaseBdev1", 00:17:00.544 "raid_bdev": "raid_bdev1", 00:17:00.544 "method": "bdev_raid_add_base_bdev", 00:17:00.544 "req_id": 1 00:17:00.544 } 00:17:00.544 Got JSON-RPC error response 00:17:00.544 response: 00:17:00.544 { 00:17:00.544 "code": -22, 00:17:00.544 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:00.544 } 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:00.544 10:11:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.482 "name": "raid_bdev1", 00:17:01.482 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:17:01.482 "strip_size_kb": 64, 00:17:01.482 "state": "online", 00:17:01.482 "raid_level": "raid5f", 00:17:01.482 "superblock": true, 00:17:01.482 "num_base_bdevs": 3, 00:17:01.482 "num_base_bdevs_discovered": 2, 00:17:01.482 "num_base_bdevs_operational": 2, 00:17:01.482 "base_bdevs_list": [ 00:17:01.482 { 00:17:01.482 "name": null, 00:17:01.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.482 "is_configured": false, 00:17:01.482 
"data_offset": 0, 00:17:01.482 "data_size": 63488 00:17:01.482 }, 00:17:01.482 { 00:17:01.482 "name": "BaseBdev2", 00:17:01.482 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:17:01.482 "is_configured": true, 00:17:01.482 "data_offset": 2048, 00:17:01.482 "data_size": 63488 00:17:01.482 }, 00:17:01.482 { 00:17:01.482 "name": "BaseBdev3", 00:17:01.482 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:17:01.482 "is_configured": true, 00:17:01.482 "data_offset": 2048, 00:17:01.482 "data_size": 63488 00:17:01.482 } 00:17:01.482 ] 00:17:01.482 }' 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.482 10:11:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.049 "name": 
"raid_bdev1", 00:17:02.049 "uuid": "dfa57555-6bd3-4cc4-91b6-5ef9fe6549ef", 00:17:02.049 "strip_size_kb": 64, 00:17:02.049 "state": "online", 00:17:02.049 "raid_level": "raid5f", 00:17:02.049 "superblock": true, 00:17:02.049 "num_base_bdevs": 3, 00:17:02.049 "num_base_bdevs_discovered": 2, 00:17:02.049 "num_base_bdevs_operational": 2, 00:17:02.049 "base_bdevs_list": [ 00:17:02.049 { 00:17:02.049 "name": null, 00:17:02.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.049 "is_configured": false, 00:17:02.049 "data_offset": 0, 00:17:02.049 "data_size": 63488 00:17:02.049 }, 00:17:02.049 { 00:17:02.049 "name": "BaseBdev2", 00:17:02.049 "uuid": "216778f8-10dc-5369-9b98-b5ee14c1a561", 00:17:02.049 "is_configured": true, 00:17:02.049 "data_offset": 2048, 00:17:02.049 "data_size": 63488 00:17:02.049 }, 00:17:02.049 { 00:17:02.049 "name": "BaseBdev3", 00:17:02.049 "uuid": "6ce3670b-8da6-5526-bb8c-c2e540094eee", 00:17:02.049 "is_configured": true, 00:17:02.049 "data_offset": 2048, 00:17:02.049 "data_size": 63488 00:17:02.049 } 00:17:02.049 ] 00:17:02.049 }' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82383 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82383 ']' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82383 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:02.049 10:11:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.049 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82383 00:17:02.308 killing process with pid 82383 00:17:02.308 Received shutdown signal, test time was about 60.000000 seconds 00:17:02.308 00:17:02.308 Latency(us) 00:17:02.308 [2024-11-19T10:11:16.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.308 [2024-11-19T10:11:16.540Z] =================================================================================================================== 00:17:02.308 [2024-11-19T10:11:16.540Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.308 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.308 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.308 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82383' 00:17:02.308 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82383 00:17:02.308 10:11:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82383 00:17:02.308 [2024-11-19 10:11:16.306006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.308 [2024-11-19 10:11:16.306200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.308 [2024-11-19 10:11:16.306299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.308 [2024-11-19 10:11:16.306321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:02.566 [2024-11-19 10:11:16.693651] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.939 10:11:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:03.939 00:17:03.939 real 0m25.483s 00:17:03.939 user 0m33.957s 00:17:03.939 sys 0m2.765s 00:17:03.939 10:11:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.939 ************************************ 00:17:03.939 END TEST raid5f_rebuild_test_sb 00:17:03.939 ************************************ 00:17:03.939 10:11:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 10:11:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:03.939 10:11:17 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:03.939 10:11:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:03.939 10:11:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.939 10:11:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 ************************************ 00:17:03.939 START TEST raid5f_state_function_test 00:17:03.939 ************************************ 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83149 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:03.939 Process raid pid: 83149 00:17:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83149' 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83149 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83149 ']' 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.939 10:11:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 [2024-11-19 10:11:18.011226] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:17:03.939 [2024-11-19 10:11:18.011691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.197 [2024-11-19 10:11:18.217843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.197 [2024-11-19 10:11:18.375674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.455 [2024-11-19 10:11:18.607183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.455 [2024-11-19 10:11:18.607256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.020 [2024-11-19 10:11:19.098844] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.020 [2024-11-19 10:11:19.098924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.020 [2024-11-19 
10:11:19.098944] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.020 [2024-11-19 10:11:19.098961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.020 [2024-11-19 10:11:19.098971] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.020 [2024-11-19 10:11:19.098986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.020 [2024-11-19 10:11:19.098997] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.020 [2024-11-19 10:11:19.099012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.020 10:11:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.020 "name": "Existed_Raid", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "strip_size_kb": 64, 00:17:05.020 "state": "configuring", 00:17:05.020 "raid_level": "raid5f", 00:17:05.020 "superblock": false, 00:17:05.020 "num_base_bdevs": 4, 00:17:05.020 "num_base_bdevs_discovered": 0, 00:17:05.020 "num_base_bdevs_operational": 4, 00:17:05.020 "base_bdevs_list": [ 00:17:05.020 { 00:17:05.020 "name": "BaseBdev1", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "is_configured": false, 00:17:05.020 "data_offset": 0, 00:17:05.020 "data_size": 0 00:17:05.020 }, 00:17:05.020 { 00:17:05.020 "name": "BaseBdev2", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "is_configured": false, 00:17:05.020 "data_offset": 0, 00:17:05.020 "data_size": 0 00:17:05.020 }, 00:17:05.020 { 00:17:05.020 "name": "BaseBdev3", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "is_configured": false, 00:17:05.020 "data_offset": 0, 00:17:05.020 "data_size": 0 00:17:05.020 }, 00:17:05.020 { 00:17:05.020 "name": "BaseBdev4", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "is_configured": false, 00:17:05.020 
"data_offset": 0, 00:17:05.020 "data_size": 0 00:17:05.020 } 00:17:05.020 ] 00:17:05.020 }' 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.020 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 [2024-11-19 10:11:19.626965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.584 [2024-11-19 10:11:19.627037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 [2024-11-19 10:11:19.638980] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.584 [2024-11-19 10:11:19.639084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.584 [2024-11-19 10:11:19.639110] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.584 [2024-11-19 10:11:19.639127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.584 [2024-11-19 
10:11:19.639137] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.584 [2024-11-19 10:11:19.639152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.584 [2024-11-19 10:11:19.639162] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.584 [2024-11-19 10:11:19.639177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 [2024-11-19 10:11:19.689021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.584 BaseBdev1 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.584 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 [ 00:17:05.584 { 00:17:05.584 "name": "BaseBdev1", 00:17:05.584 "aliases": [ 00:17:05.584 "6e97e903-3f5a-456d-ba97-be3303012879" 00:17:05.584 ], 00:17:05.584 "product_name": "Malloc disk", 00:17:05.584 "block_size": 512, 00:17:05.584 "num_blocks": 65536, 00:17:05.584 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:05.584 "assigned_rate_limits": { 00:17:05.584 "rw_ios_per_sec": 0, 00:17:05.584 "rw_mbytes_per_sec": 0, 00:17:05.584 "r_mbytes_per_sec": 0, 00:17:05.585 "w_mbytes_per_sec": 0 00:17:05.585 }, 00:17:05.585 "claimed": true, 00:17:05.585 "claim_type": "exclusive_write", 00:17:05.585 "zoned": false, 00:17:05.585 "supported_io_types": { 00:17:05.585 "read": true, 00:17:05.585 "write": true, 00:17:05.585 "unmap": true, 00:17:05.585 "flush": true, 00:17:05.585 "reset": true, 00:17:05.585 "nvme_admin": false, 00:17:05.585 "nvme_io": false, 00:17:05.585 "nvme_io_md": false, 00:17:05.585 "write_zeroes": true, 00:17:05.585 "zcopy": true, 00:17:05.585 "get_zone_info": false, 00:17:05.585 "zone_management": false, 00:17:05.585 "zone_append": false, 00:17:05.585 "compare": false, 00:17:05.585 "compare_and_write": false, 00:17:05.585 "abort": true, 00:17:05.585 "seek_hole": false, 00:17:05.585 "seek_data": false, 00:17:05.585 "copy": true, 00:17:05.585 
"nvme_iov_md": false 00:17:05.585 }, 00:17:05.585 "memory_domains": [ 00:17:05.585 { 00:17:05.585 "dma_device_id": "system", 00:17:05.585 "dma_device_type": 1 00:17:05.585 }, 00:17:05.585 { 00:17:05.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.585 "dma_device_type": 2 00:17:05.585 } 00:17:05.585 ], 00:17:05.585 "driver_specific": {} 00:17:05.585 } 00:17:05.585 ] 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.585 "name": "Existed_Raid", 00:17:05.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.585 "strip_size_kb": 64, 00:17:05.585 "state": "configuring", 00:17:05.585 "raid_level": "raid5f", 00:17:05.585 "superblock": false, 00:17:05.585 "num_base_bdevs": 4, 00:17:05.585 "num_base_bdevs_discovered": 1, 00:17:05.585 "num_base_bdevs_operational": 4, 00:17:05.585 "base_bdevs_list": [ 00:17:05.585 { 00:17:05.585 "name": "BaseBdev1", 00:17:05.585 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:05.585 "is_configured": true, 00:17:05.585 "data_offset": 0, 00:17:05.585 "data_size": 65536 00:17:05.585 }, 00:17:05.585 { 00:17:05.585 "name": "BaseBdev2", 00:17:05.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.585 "is_configured": false, 00:17:05.585 "data_offset": 0, 00:17:05.585 "data_size": 0 00:17:05.585 }, 00:17:05.585 { 00:17:05.585 "name": "BaseBdev3", 00:17:05.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.585 "is_configured": false, 00:17:05.585 "data_offset": 0, 00:17:05.585 "data_size": 0 00:17:05.585 }, 00:17:05.585 { 00:17:05.585 "name": "BaseBdev4", 00:17:05.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.585 "is_configured": false, 00:17:05.585 "data_offset": 0, 00:17:05.585 "data_size": 0 00:17:05.585 } 00:17:05.585 ] 00:17:05.585 }' 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.585 10:11:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.150 [2024-11-19 10:11:20.261258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.150 [2024-11-19 10:11:20.261336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.150 [2024-11-19 10:11:20.269320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.150 [2024-11-19 10:11:20.272222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.150 [2024-11-19 10:11:20.272408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.150 [2024-11-19 10:11:20.272566] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.150 [2024-11-19 10:11:20.272726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.150 [2024-11-19 10:11:20.272885] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.150 [2024-11-19 10:11:20.272949] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.150 "name": "Existed_Raid", 00:17:06.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.150 "strip_size_kb": 64, 00:17:06.150 "state": "configuring", 00:17:06.150 "raid_level": "raid5f", 00:17:06.150 "superblock": false, 00:17:06.150 "num_base_bdevs": 4, 00:17:06.150 "num_base_bdevs_discovered": 1, 00:17:06.150 "num_base_bdevs_operational": 4, 00:17:06.150 "base_bdevs_list": [ 00:17:06.150 { 00:17:06.150 "name": "BaseBdev1", 00:17:06.150 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:06.150 "is_configured": true, 00:17:06.150 "data_offset": 0, 00:17:06.150 "data_size": 65536 00:17:06.150 }, 00:17:06.150 { 00:17:06.150 "name": "BaseBdev2", 00:17:06.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.150 "is_configured": false, 00:17:06.150 "data_offset": 0, 00:17:06.150 "data_size": 0 00:17:06.150 }, 00:17:06.150 { 00:17:06.150 "name": "BaseBdev3", 00:17:06.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.150 "is_configured": false, 00:17:06.150 "data_offset": 0, 00:17:06.150 "data_size": 0 00:17:06.150 }, 00:17:06.150 { 00:17:06.150 "name": "BaseBdev4", 00:17:06.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.150 "is_configured": false, 00:17:06.150 "data_offset": 0, 00:17:06.150 "data_size": 0 00:17:06.150 } 00:17:06.150 ] 00:17:06.150 }' 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.150 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.717 [2024-11-19 10:11:20.812294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.717 BaseBdev2 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.717 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.718 [ 00:17:06.718 { 00:17:06.718 "name": "BaseBdev2", 00:17:06.718 "aliases": [ 
00:17:06.718 "4772b6a7-e2fe-4be0-b841-d0bff557af89" 00:17:06.718 ], 00:17:06.718 "product_name": "Malloc disk", 00:17:06.718 "block_size": 512, 00:17:06.718 "num_blocks": 65536, 00:17:06.718 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:06.718 "assigned_rate_limits": { 00:17:06.718 "rw_ios_per_sec": 0, 00:17:06.718 "rw_mbytes_per_sec": 0, 00:17:06.718 "r_mbytes_per_sec": 0, 00:17:06.718 "w_mbytes_per_sec": 0 00:17:06.718 }, 00:17:06.718 "claimed": true, 00:17:06.718 "claim_type": "exclusive_write", 00:17:06.718 "zoned": false, 00:17:06.718 "supported_io_types": { 00:17:06.718 "read": true, 00:17:06.718 "write": true, 00:17:06.718 "unmap": true, 00:17:06.718 "flush": true, 00:17:06.718 "reset": true, 00:17:06.718 "nvme_admin": false, 00:17:06.718 "nvme_io": false, 00:17:06.718 "nvme_io_md": false, 00:17:06.718 "write_zeroes": true, 00:17:06.718 "zcopy": true, 00:17:06.718 "get_zone_info": false, 00:17:06.718 "zone_management": false, 00:17:06.718 "zone_append": false, 00:17:06.718 "compare": false, 00:17:06.718 "compare_and_write": false, 00:17:06.718 "abort": true, 00:17:06.718 "seek_hole": false, 00:17:06.718 "seek_data": false, 00:17:06.718 "copy": true, 00:17:06.718 "nvme_iov_md": false 00:17:06.718 }, 00:17:06.718 "memory_domains": [ 00:17:06.718 { 00:17:06.718 "dma_device_id": "system", 00:17:06.718 "dma_device_type": 1 00:17:06.718 }, 00:17:06.718 { 00:17:06.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.718 "dma_device_type": 2 00:17:06.718 } 00:17:06.718 ], 00:17:06.718 "driver_specific": {} 00:17:06.718 } 00:17:06.718 ] 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.718 "name": "Existed_Raid", 00:17:06.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.718 "strip_size_kb": 64, 
00:17:06.718 "state": "configuring", 00:17:06.718 "raid_level": "raid5f", 00:17:06.718 "superblock": false, 00:17:06.718 "num_base_bdevs": 4, 00:17:06.718 "num_base_bdevs_discovered": 2, 00:17:06.718 "num_base_bdevs_operational": 4, 00:17:06.718 "base_bdevs_list": [ 00:17:06.718 { 00:17:06.718 "name": "BaseBdev1", 00:17:06.718 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:06.718 "is_configured": true, 00:17:06.718 "data_offset": 0, 00:17:06.718 "data_size": 65536 00:17:06.718 }, 00:17:06.718 { 00:17:06.718 "name": "BaseBdev2", 00:17:06.718 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:06.718 "is_configured": true, 00:17:06.718 "data_offset": 0, 00:17:06.718 "data_size": 65536 00:17:06.718 }, 00:17:06.718 { 00:17:06.718 "name": "BaseBdev3", 00:17:06.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.718 "is_configured": false, 00:17:06.718 "data_offset": 0, 00:17:06.718 "data_size": 0 00:17:06.718 }, 00:17:06.718 { 00:17:06.718 "name": "BaseBdev4", 00:17:06.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.718 "is_configured": false, 00:17:06.718 "data_offset": 0, 00:17:06.718 "data_size": 0 00:17:06.718 } 00:17:06.718 ] 00:17:06.718 }' 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.718 10:11:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.285 [2024-11-19 10:11:21.473168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.285 BaseBdev3 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.285 [ 00:17:07.285 { 00:17:07.285 "name": "BaseBdev3", 00:17:07.285 "aliases": [ 00:17:07.285 "a2812866-3145-40de-a01b-f1aa3a0fec6e" 00:17:07.285 ], 00:17:07.285 "product_name": "Malloc disk", 00:17:07.285 "block_size": 512, 00:17:07.285 "num_blocks": 65536, 00:17:07.285 "uuid": "a2812866-3145-40de-a01b-f1aa3a0fec6e", 00:17:07.285 "assigned_rate_limits": { 00:17:07.285 "rw_ios_per_sec": 0, 00:17:07.285 "rw_mbytes_per_sec": 0, 00:17:07.285 "r_mbytes_per_sec": 0, 00:17:07.285 
"w_mbytes_per_sec": 0 00:17:07.285 }, 00:17:07.285 "claimed": true, 00:17:07.285 "claim_type": "exclusive_write", 00:17:07.285 "zoned": false, 00:17:07.285 "supported_io_types": { 00:17:07.285 "read": true, 00:17:07.285 "write": true, 00:17:07.285 "unmap": true, 00:17:07.285 "flush": true, 00:17:07.285 "reset": true, 00:17:07.285 "nvme_admin": false, 00:17:07.285 "nvme_io": false, 00:17:07.285 "nvme_io_md": false, 00:17:07.285 "write_zeroes": true, 00:17:07.285 "zcopy": true, 00:17:07.285 "get_zone_info": false, 00:17:07.285 "zone_management": false, 00:17:07.285 "zone_append": false, 00:17:07.285 "compare": false, 00:17:07.285 "compare_and_write": false, 00:17:07.285 "abort": true, 00:17:07.285 "seek_hole": false, 00:17:07.285 "seek_data": false, 00:17:07.285 "copy": true, 00:17:07.285 "nvme_iov_md": false 00:17:07.285 }, 00:17:07.285 "memory_domains": [ 00:17:07.285 { 00:17:07.285 "dma_device_id": "system", 00:17:07.285 "dma_device_type": 1 00:17:07.285 }, 00:17:07.285 { 00:17:07.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.285 "dma_device_type": 2 00:17:07.285 } 00:17:07.285 ], 00:17:07.285 "driver_specific": {} 00:17:07.285 } 00:17:07.285 ] 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.285 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.544 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.544 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.544 "name": "Existed_Raid", 00:17:07.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.544 "strip_size_kb": 64, 00:17:07.544 "state": "configuring", 00:17:07.544 "raid_level": "raid5f", 00:17:07.544 "superblock": false, 00:17:07.544 "num_base_bdevs": 4, 00:17:07.544 "num_base_bdevs_discovered": 3, 00:17:07.544 "num_base_bdevs_operational": 4, 00:17:07.544 "base_bdevs_list": [ 00:17:07.544 { 00:17:07.544 "name": "BaseBdev1", 00:17:07.544 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:07.544 
"is_configured": true, 00:17:07.544 "data_offset": 0, 00:17:07.544 "data_size": 65536 00:17:07.544 }, 00:17:07.544 { 00:17:07.544 "name": "BaseBdev2", 00:17:07.544 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:07.544 "is_configured": true, 00:17:07.544 "data_offset": 0, 00:17:07.544 "data_size": 65536 00:17:07.544 }, 00:17:07.544 { 00:17:07.544 "name": "BaseBdev3", 00:17:07.544 "uuid": "a2812866-3145-40de-a01b-f1aa3a0fec6e", 00:17:07.544 "is_configured": true, 00:17:07.544 "data_offset": 0, 00:17:07.544 "data_size": 65536 00:17:07.544 }, 00:17:07.544 { 00:17:07.544 "name": "BaseBdev4", 00:17:07.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.544 "is_configured": false, 00:17:07.544 "data_offset": 0, 00:17:07.544 "data_size": 0 00:17:07.544 } 00:17:07.544 ] 00:17:07.544 }' 00:17:07.544 10:11:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.544 10:11:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.811 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:07.811 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.811 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.069 [2024-11-19 10:11:22.051778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.069 [2024-11-19 10:11:22.052211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.069 [2024-11-19 10:11:22.052272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:08.069 [2024-11-19 10:11:22.052778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.070 [2024-11-19 10:11:22.060061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:17:08.070 [2024-11-19 10:11:22.060116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:08.070 [2024-11-19 10:11:22.060561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.070 BaseBdev4 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.070 [ 00:17:08.070 { 00:17:08.070 "name": "BaseBdev4", 00:17:08.070 "aliases": [ 00:17:08.070 
"e44fd8d9-0838-4ddd-8649-b1cd4e0e5636" 00:17:08.070 ], 00:17:08.070 "product_name": "Malloc disk", 00:17:08.070 "block_size": 512, 00:17:08.070 "num_blocks": 65536, 00:17:08.070 "uuid": "e44fd8d9-0838-4ddd-8649-b1cd4e0e5636", 00:17:08.070 "assigned_rate_limits": { 00:17:08.070 "rw_ios_per_sec": 0, 00:17:08.070 "rw_mbytes_per_sec": 0, 00:17:08.070 "r_mbytes_per_sec": 0, 00:17:08.070 "w_mbytes_per_sec": 0 00:17:08.070 }, 00:17:08.070 "claimed": true, 00:17:08.070 "claim_type": "exclusive_write", 00:17:08.070 "zoned": false, 00:17:08.070 "supported_io_types": { 00:17:08.070 "read": true, 00:17:08.070 "write": true, 00:17:08.070 "unmap": true, 00:17:08.070 "flush": true, 00:17:08.070 "reset": true, 00:17:08.070 "nvme_admin": false, 00:17:08.070 "nvme_io": false, 00:17:08.070 "nvme_io_md": false, 00:17:08.070 "write_zeroes": true, 00:17:08.070 "zcopy": true, 00:17:08.070 "get_zone_info": false, 00:17:08.070 "zone_management": false, 00:17:08.070 "zone_append": false, 00:17:08.070 "compare": false, 00:17:08.070 "compare_and_write": false, 00:17:08.070 "abort": true, 00:17:08.070 "seek_hole": false, 00:17:08.070 "seek_data": false, 00:17:08.070 "copy": true, 00:17:08.070 "nvme_iov_md": false 00:17:08.070 }, 00:17:08.070 "memory_domains": [ 00:17:08.070 { 00:17:08.070 "dma_device_id": "system", 00:17:08.070 "dma_device_type": 1 00:17:08.070 }, 00:17:08.070 { 00:17:08.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.070 "dma_device_type": 2 00:17:08.070 } 00:17:08.070 ], 00:17:08.070 "driver_specific": {} 00:17:08.070 } 00:17:08.070 ] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.070 
10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.070 "name": "Existed_Raid", 00:17:08.070 "uuid": "3f6f4990-b88d-4102-9cda-0aa46df20749", 00:17:08.070 "strip_size_kb": 64, 00:17:08.070 "state": 
"online", 00:17:08.070 "raid_level": "raid5f", 00:17:08.070 "superblock": false, 00:17:08.070 "num_base_bdevs": 4, 00:17:08.070 "num_base_bdevs_discovered": 4, 00:17:08.070 "num_base_bdevs_operational": 4, 00:17:08.070 "base_bdevs_list": [ 00:17:08.070 { 00:17:08.070 "name": "BaseBdev1", 00:17:08.070 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:08.070 "is_configured": true, 00:17:08.070 "data_offset": 0, 00:17:08.070 "data_size": 65536 00:17:08.070 }, 00:17:08.070 { 00:17:08.070 "name": "BaseBdev2", 00:17:08.070 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:08.070 "is_configured": true, 00:17:08.070 "data_offset": 0, 00:17:08.070 "data_size": 65536 00:17:08.070 }, 00:17:08.070 { 00:17:08.070 "name": "BaseBdev3", 00:17:08.070 "uuid": "a2812866-3145-40de-a01b-f1aa3a0fec6e", 00:17:08.070 "is_configured": true, 00:17:08.070 "data_offset": 0, 00:17:08.070 "data_size": 65536 00:17:08.070 }, 00:17:08.070 { 00:17:08.070 "name": "BaseBdev4", 00:17:08.070 "uuid": "e44fd8d9-0838-4ddd-8649-b1cd4e0e5636", 00:17:08.070 "is_configured": true, 00:17:08.070 "data_offset": 0, 00:17:08.070 "data_size": 65536 00:17:08.070 } 00:17:08.070 ] 00:17:08.070 }' 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.070 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.638 10:11:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.638 [2024-11-19 10:11:22.605112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.638 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.638 "name": "Existed_Raid", 00:17:08.638 "aliases": [ 00:17:08.638 "3f6f4990-b88d-4102-9cda-0aa46df20749" 00:17:08.638 ], 00:17:08.638 "product_name": "Raid Volume", 00:17:08.638 "block_size": 512, 00:17:08.638 "num_blocks": 196608, 00:17:08.638 "uuid": "3f6f4990-b88d-4102-9cda-0aa46df20749", 00:17:08.638 "assigned_rate_limits": { 00:17:08.638 "rw_ios_per_sec": 0, 00:17:08.638 "rw_mbytes_per_sec": 0, 00:17:08.638 "r_mbytes_per_sec": 0, 00:17:08.638 "w_mbytes_per_sec": 0 00:17:08.638 }, 00:17:08.638 "claimed": false, 00:17:08.638 "zoned": false, 00:17:08.638 "supported_io_types": { 00:17:08.638 "read": true, 00:17:08.638 "write": true, 00:17:08.638 "unmap": false, 00:17:08.638 "flush": false, 00:17:08.638 "reset": true, 00:17:08.638 "nvme_admin": false, 00:17:08.638 "nvme_io": false, 00:17:08.638 "nvme_io_md": false, 00:17:08.638 "write_zeroes": true, 00:17:08.638 "zcopy": false, 00:17:08.638 "get_zone_info": false, 00:17:08.638 "zone_management": false, 00:17:08.638 "zone_append": false, 00:17:08.638 "compare": false, 00:17:08.638 "compare_and_write": false, 00:17:08.638 "abort": false, 
00:17:08.638 "seek_hole": false, 00:17:08.638 "seek_data": false, 00:17:08.638 "copy": false, 00:17:08.638 "nvme_iov_md": false 00:17:08.638 }, 00:17:08.638 "driver_specific": { 00:17:08.638 "raid": { 00:17:08.638 "uuid": "3f6f4990-b88d-4102-9cda-0aa46df20749", 00:17:08.638 "strip_size_kb": 64, 00:17:08.638 "state": "online", 00:17:08.638 "raid_level": "raid5f", 00:17:08.638 "superblock": false, 00:17:08.638 "num_base_bdevs": 4, 00:17:08.639 "num_base_bdevs_discovered": 4, 00:17:08.639 "num_base_bdevs_operational": 4, 00:17:08.639 "base_bdevs_list": [ 00:17:08.639 { 00:17:08.639 "name": "BaseBdev1", 00:17:08.639 "uuid": "6e97e903-3f5a-456d-ba97-be3303012879", 00:17:08.639 "is_configured": true, 00:17:08.639 "data_offset": 0, 00:17:08.639 "data_size": 65536 00:17:08.639 }, 00:17:08.639 { 00:17:08.639 "name": "BaseBdev2", 00:17:08.639 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:08.639 "is_configured": true, 00:17:08.639 "data_offset": 0, 00:17:08.639 "data_size": 65536 00:17:08.639 }, 00:17:08.639 { 00:17:08.639 "name": "BaseBdev3", 00:17:08.639 "uuid": "a2812866-3145-40de-a01b-f1aa3a0fec6e", 00:17:08.639 "is_configured": true, 00:17:08.639 "data_offset": 0, 00:17:08.639 "data_size": 65536 00:17:08.639 }, 00:17:08.639 { 00:17:08.639 "name": "BaseBdev4", 00:17:08.639 "uuid": "e44fd8d9-0838-4ddd-8649-b1cd4e0e5636", 00:17:08.639 "is_configured": true, 00:17:08.639 "data_offset": 0, 00:17:08.639 "data_size": 65536 00:17:08.639 } 00:17:08.639 ] 00:17:08.639 } 00:17:08.639 } 00:17:08.639 }' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:08.639 BaseBdev2 00:17:08.639 BaseBdev3 00:17:08.639 BaseBdev4' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.639 10:11:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.898 10:11:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.898 [2024-11-19 10:11:22.937011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.898 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.898 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:08.898 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:08.898 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.898 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.899 10:11:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.899 "name": "Existed_Raid", 00:17:08.899 "uuid": "3f6f4990-b88d-4102-9cda-0aa46df20749", 00:17:08.899 "strip_size_kb": 64, 00:17:08.899 "state": "online", 00:17:08.899 "raid_level": "raid5f", 00:17:08.899 "superblock": false, 00:17:08.899 "num_base_bdevs": 4, 00:17:08.899 "num_base_bdevs_discovered": 3, 00:17:08.899 "num_base_bdevs_operational": 3, 00:17:08.899 "base_bdevs_list": [ 00:17:08.899 { 00:17:08.899 "name": null, 00:17:08.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.899 "is_configured": false, 00:17:08.899 "data_offset": 0, 00:17:08.899 "data_size": 65536 00:17:08.899 }, 00:17:08.899 { 00:17:08.899 "name": "BaseBdev2", 00:17:08.899 "uuid": "4772b6a7-e2fe-4be0-b841-d0bff557af89", 00:17:08.899 "is_configured": true, 00:17:08.899 "data_offset": 0, 00:17:08.899 "data_size": 65536 00:17:08.899 }, 00:17:08.899 { 00:17:08.899 "name": "BaseBdev3", 00:17:08.899 "uuid": "a2812866-3145-40de-a01b-f1aa3a0fec6e", 00:17:08.899 "is_configured": true, 00:17:08.899 
"data_offset": 0, 00:17:08.899 "data_size": 65536 00:17:08.899 }, 00:17:08.899 { 00:17:08.899 "name": "BaseBdev4", 00:17:08.899 "uuid": "e44fd8d9-0838-4ddd-8649-b1cd4e0e5636", 00:17:08.899 "is_configured": true, 00:17:08.899 "data_offset": 0, 00:17:08.899 "data_size": 65536 00:17:08.899 } 00:17:08.899 ] 00:17:08.899 }' 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.899 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.466 [2024-11-19 10:11:23.577661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.466 
[2024-11-19 10:11:23.577832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.466 [2024-11-19 10:11:23.668999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.466 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.725 [2024-11-19 10:11:23.729056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # 
(( i++ )) 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.725 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.725 [2024-11-19 10:11:23.871364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:09.725 [2024-11-19 10:11:23.871435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:09.985 10:11:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 10:11:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 BaseBdev2 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 [ 00:17:09.985 { 00:17:09.985 "name": "BaseBdev2", 00:17:09.985 "aliases": [ 00:17:09.985 "91943832-78be-4cb9-a7cf-db460cdb4a99" 00:17:09.985 ], 00:17:09.985 "product_name": "Malloc disk", 00:17:09.985 "block_size": 512, 00:17:09.985 "num_blocks": 65536, 00:17:09.985 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:09.985 "assigned_rate_limits": { 00:17:09.985 "rw_ios_per_sec": 0, 00:17:09.985 "rw_mbytes_per_sec": 0, 00:17:09.985 "r_mbytes_per_sec": 0, 00:17:09.985 "w_mbytes_per_sec": 0 00:17:09.985 }, 00:17:09.985 "claimed": false, 00:17:09.985 "zoned": false, 00:17:09.985 "supported_io_types": { 00:17:09.985 "read": true, 00:17:09.985 "write": true, 00:17:09.985 "unmap": true, 00:17:09.985 "flush": true, 00:17:09.985 "reset": true, 00:17:09.985 "nvme_admin": false, 00:17:09.985 "nvme_io": false, 00:17:09.985 "nvme_io_md": false, 00:17:09.985 "write_zeroes": true, 00:17:09.985 "zcopy": true, 00:17:09.985 "get_zone_info": false, 00:17:09.985 "zone_management": false, 00:17:09.985 "zone_append": false, 00:17:09.985 "compare": false, 
00:17:09.985 "compare_and_write": false, 00:17:09.985 "abort": true, 00:17:09.985 "seek_hole": false, 00:17:09.985 "seek_data": false, 00:17:09.985 "copy": true, 00:17:09.985 "nvme_iov_md": false 00:17:09.985 }, 00:17:09.985 "memory_domains": [ 00:17:09.985 { 00:17:09.985 "dma_device_id": "system", 00:17:09.985 "dma_device_type": 1 00:17:09.985 }, 00:17:09.985 { 00:17:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.985 "dma_device_type": 2 00:17:09.985 } 00:17:09.985 ], 00:17:09.985 "driver_specific": {} 00:17:09.985 } 00:17:09.985 ] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 BaseBdev3 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.985 [ 00:17:09.985 { 00:17:09.985 "name": "BaseBdev3", 00:17:09.985 "aliases": [ 00:17:09.985 "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a" 00:17:09.985 ], 00:17:09.985 "product_name": "Malloc disk", 00:17:09.985 "block_size": 512, 00:17:09.985 "num_blocks": 65536, 00:17:09.985 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:09.985 "assigned_rate_limits": { 00:17:09.985 "rw_ios_per_sec": 0, 00:17:09.985 "rw_mbytes_per_sec": 0, 00:17:09.985 "r_mbytes_per_sec": 0, 00:17:09.985 "w_mbytes_per_sec": 0 00:17:09.985 }, 00:17:09.985 "claimed": false, 00:17:09.985 "zoned": false, 00:17:09.985 "supported_io_types": { 00:17:09.985 "read": true, 00:17:09.985 "write": true, 00:17:09.985 "unmap": true, 00:17:09.985 "flush": true, 00:17:09.985 "reset": true, 00:17:09.985 "nvme_admin": false, 00:17:09.985 "nvme_io": false, 00:17:09.985 "nvme_io_md": false, 00:17:09.985 "write_zeroes": true, 00:17:09.985 "zcopy": true, 00:17:09.985 "get_zone_info": false, 00:17:09.985 "zone_management": false, 00:17:09.985 "zone_append": 
false, 00:17:09.985 "compare": false, 00:17:09.985 "compare_and_write": false, 00:17:09.985 "abort": true, 00:17:09.985 "seek_hole": false, 00:17:09.985 "seek_data": false, 00:17:09.985 "copy": true, 00:17:09.985 "nvme_iov_md": false 00:17:09.985 }, 00:17:09.985 "memory_domains": [ 00:17:09.985 { 00:17:09.985 "dma_device_id": "system", 00:17:09.985 "dma_device_type": 1 00:17:09.985 }, 00:17:09.985 { 00:17:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.985 "dma_device_type": 2 00:17:09.985 } 00:17:09.985 ], 00:17:09.985 "driver_specific": {} 00:17:09.985 } 00:17:09.985 ] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.985 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.245 BaseBdev4 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:10.245 10:11:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.245 [ 00:17:10.245 { 00:17:10.245 "name": "BaseBdev4", 00:17:10.245 "aliases": [ 00:17:10.245 "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58" 00:17:10.245 ], 00:17:10.245 "product_name": "Malloc disk", 00:17:10.245 "block_size": 512, 00:17:10.245 "num_blocks": 65536, 00:17:10.245 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:10.245 "assigned_rate_limits": { 00:17:10.245 "rw_ios_per_sec": 0, 00:17:10.245 "rw_mbytes_per_sec": 0, 00:17:10.245 "r_mbytes_per_sec": 0, 00:17:10.245 "w_mbytes_per_sec": 0 00:17:10.245 }, 00:17:10.245 "claimed": false, 00:17:10.245 "zoned": false, 00:17:10.245 "supported_io_types": { 00:17:10.245 "read": true, 00:17:10.245 "write": true, 00:17:10.245 "unmap": true, 00:17:10.245 "flush": true, 00:17:10.245 "reset": true, 00:17:10.245 "nvme_admin": false, 00:17:10.245 "nvme_io": false, 00:17:10.245 "nvme_io_md": false, 00:17:10.245 "write_zeroes": true, 00:17:10.245 "zcopy": true, 00:17:10.245 "get_zone_info": false, 00:17:10.245 
"zone_management": false, 00:17:10.245 "zone_append": false, 00:17:10.245 "compare": false, 00:17:10.245 "compare_and_write": false, 00:17:10.245 "abort": true, 00:17:10.245 "seek_hole": false, 00:17:10.245 "seek_data": false, 00:17:10.245 "copy": true, 00:17:10.245 "nvme_iov_md": false 00:17:10.245 }, 00:17:10.245 "memory_domains": [ 00:17:10.245 { 00:17:10.245 "dma_device_id": "system", 00:17:10.245 "dma_device_type": 1 00:17:10.245 }, 00:17:10.245 { 00:17:10.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.245 "dma_device_type": 2 00:17:10.245 } 00:17:10.245 ], 00:17:10.245 "driver_specific": {} 00:17:10.245 } 00:17:10.245 ] 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.245 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.245 [2024-11-19 10:11:24.265869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.245 [2024-11-19 10:11:24.265929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.245 [2024-11-19 10:11:24.265966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.246 [2024-11-19 10:11:24.268552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:17:10.246 [2024-11-19 10:11:24.268628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.246 "name": "Existed_Raid", 00:17:10.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.246 "strip_size_kb": 64, 00:17:10.246 "state": "configuring", 00:17:10.246 "raid_level": "raid5f", 00:17:10.246 "superblock": false, 00:17:10.246 "num_base_bdevs": 4, 00:17:10.246 "num_base_bdevs_discovered": 3, 00:17:10.246 "num_base_bdevs_operational": 4, 00:17:10.246 "base_bdevs_list": [ 00:17:10.246 { 00:17:10.246 "name": "BaseBdev1", 00:17:10.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.246 "is_configured": false, 00:17:10.246 "data_offset": 0, 00:17:10.246 "data_size": 0 00:17:10.246 }, 00:17:10.246 { 00:17:10.246 "name": "BaseBdev2", 00:17:10.246 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:10.246 "is_configured": true, 00:17:10.246 "data_offset": 0, 00:17:10.246 "data_size": 65536 00:17:10.246 }, 00:17:10.246 { 00:17:10.246 "name": "BaseBdev3", 00:17:10.246 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:10.246 "is_configured": true, 00:17:10.246 "data_offset": 0, 00:17:10.246 "data_size": 65536 00:17:10.246 }, 00:17:10.246 { 00:17:10.246 "name": "BaseBdev4", 00:17:10.246 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:10.246 "is_configured": true, 00:17:10.246 "data_offset": 0, 00:17:10.246 "data_size": 65536 00:17:10.246 } 00:17:10.246 ] 00:17:10.246 }' 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.246 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:10.813 [2024-11-19 10:11:24.798067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.813 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.813 10:11:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.813 "name": "Existed_Raid", 00:17:10.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.813 "strip_size_kb": 64, 00:17:10.813 "state": "configuring", 00:17:10.813 "raid_level": "raid5f", 00:17:10.813 "superblock": false, 00:17:10.813 "num_base_bdevs": 4, 00:17:10.813 "num_base_bdevs_discovered": 2, 00:17:10.813 "num_base_bdevs_operational": 4, 00:17:10.813 "base_bdevs_list": [ 00:17:10.813 { 00:17:10.813 "name": "BaseBdev1", 00:17:10.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.814 "is_configured": false, 00:17:10.814 "data_offset": 0, 00:17:10.814 "data_size": 0 00:17:10.814 }, 00:17:10.814 { 00:17:10.814 "name": null, 00:17:10.814 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:10.814 "is_configured": false, 00:17:10.814 "data_offset": 0, 00:17:10.814 "data_size": 65536 00:17:10.814 }, 00:17:10.814 { 00:17:10.814 "name": "BaseBdev3", 00:17:10.814 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:10.814 "is_configured": true, 00:17:10.814 "data_offset": 0, 00:17:10.814 "data_size": 65536 00:17:10.814 }, 00:17:10.814 { 00:17:10.814 "name": "BaseBdev4", 00:17:10.814 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:10.814 "is_configured": true, 00:17:10.814 "data_offset": 0, 00:17:10.814 "data_size": 65536 00:17:10.814 } 00:17:10.814 ] 00:17:10.814 }' 00:17:10.814 10:11:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.814 10:11:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 10:11:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 [2024-11-19 10:11:25.411984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.380 BaseBdev1 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 [ 00:17:11.380 { 00:17:11.380 "name": "BaseBdev1", 00:17:11.380 "aliases": [ 00:17:11.380 "620050a6-f3d1-4d8a-8817-1645fac1575a" 00:17:11.380 ], 00:17:11.380 "product_name": "Malloc disk", 00:17:11.380 "block_size": 512, 00:17:11.380 "num_blocks": 65536, 00:17:11.380 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:11.380 "assigned_rate_limits": { 00:17:11.380 "rw_ios_per_sec": 0, 00:17:11.380 "rw_mbytes_per_sec": 0, 00:17:11.380 "r_mbytes_per_sec": 0, 00:17:11.380 "w_mbytes_per_sec": 0 00:17:11.380 }, 00:17:11.380 "claimed": true, 00:17:11.380 "claim_type": "exclusive_write", 00:17:11.380 "zoned": false, 00:17:11.380 "supported_io_types": { 00:17:11.380 "read": true, 00:17:11.380 "write": true, 00:17:11.380 "unmap": true, 00:17:11.380 "flush": true, 00:17:11.380 "reset": true, 00:17:11.380 "nvme_admin": false, 00:17:11.380 "nvme_io": false, 00:17:11.380 "nvme_io_md": false, 00:17:11.380 "write_zeroes": true, 00:17:11.380 "zcopy": true, 00:17:11.380 "get_zone_info": false, 00:17:11.380 "zone_management": false, 00:17:11.380 "zone_append": false, 00:17:11.380 "compare": false, 00:17:11.380 "compare_and_write": false, 00:17:11.380 "abort": true, 00:17:11.380 "seek_hole": false, 00:17:11.380 "seek_data": false, 00:17:11.380 "copy": true, 00:17:11.380 "nvme_iov_md": false 00:17:11.380 }, 00:17:11.380 "memory_domains": [ 00:17:11.380 { 00:17:11.380 "dma_device_id": "system", 00:17:11.380 "dma_device_type": 1 00:17:11.380 }, 00:17:11.380 { 00:17:11.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.380 "dma_device_type": 2 00:17:11.380 } 00:17:11.380 ], 
00:17:11.380 "driver_specific": {} 00:17:11.380 } 00:17:11.380 ] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.380 "name": "Existed_Raid", 00:17:11.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.380 "strip_size_kb": 64, 00:17:11.380 "state": "configuring", 00:17:11.380 "raid_level": "raid5f", 00:17:11.380 "superblock": false, 00:17:11.380 "num_base_bdevs": 4, 00:17:11.380 "num_base_bdevs_discovered": 3, 00:17:11.380 "num_base_bdevs_operational": 4, 00:17:11.380 "base_bdevs_list": [ 00:17:11.380 { 00:17:11.380 "name": "BaseBdev1", 00:17:11.380 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:11.380 "is_configured": true, 00:17:11.380 "data_offset": 0, 00:17:11.380 "data_size": 65536 00:17:11.380 }, 00:17:11.380 { 00:17:11.380 "name": null, 00:17:11.380 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:11.380 "is_configured": false, 00:17:11.380 "data_offset": 0, 00:17:11.380 "data_size": 65536 00:17:11.380 }, 00:17:11.380 { 00:17:11.380 "name": "BaseBdev3", 00:17:11.380 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:11.380 "is_configured": true, 00:17:11.380 "data_offset": 0, 00:17:11.380 "data_size": 65536 00:17:11.380 }, 00:17:11.380 { 00:17:11.380 "name": "BaseBdev4", 00:17:11.381 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:11.381 "is_configured": true, 00:17:11.381 "data_offset": 0, 00:17:11.381 "data_size": 65536 00:17:11.381 } 00:17:11.381 ] 00:17:11.381 }' 00:17:11.381 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.381 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:11.949 10:11:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.949 10:11:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 10:11:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 [2024-11-19 10:11:26.016298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.949 "name": "Existed_Raid", 00:17:11.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.949 "strip_size_kb": 64, 00:17:11.949 "state": "configuring", 00:17:11.949 "raid_level": "raid5f", 00:17:11.949 "superblock": false, 00:17:11.949 "num_base_bdevs": 4, 00:17:11.949 "num_base_bdevs_discovered": 2, 00:17:11.949 "num_base_bdevs_operational": 4, 00:17:11.949 "base_bdevs_list": [ 00:17:11.949 { 00:17:11.949 "name": "BaseBdev1", 00:17:11.949 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:11.949 "is_configured": true, 00:17:11.949 "data_offset": 0, 00:17:11.949 "data_size": 65536 00:17:11.949 }, 00:17:11.949 { 00:17:11.949 "name": null, 00:17:11.949 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:11.949 "is_configured": false, 00:17:11.949 "data_offset": 0, 00:17:11.949 "data_size": 65536 00:17:11.949 }, 00:17:11.949 { 00:17:11.949 "name": null, 00:17:11.949 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:11.949 "is_configured": false, 00:17:11.949 "data_offset": 0, 00:17:11.949 "data_size": 65536 00:17:11.949 }, 00:17:11.949 { 00:17:11.949 "name": "BaseBdev4", 00:17:11.949 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:11.949 
"is_configured": true, 00:17:11.949 "data_offset": 0, 00:17:11.949 "data_size": 65536 00:17:11.949 } 00:17:11.949 ] 00:17:11.949 }' 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.949 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.557 [2024-11-19 10:11:26.596411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.557 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.557 "name": "Existed_Raid", 00:17:12.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.558 "strip_size_kb": 64, 00:17:12.558 "state": "configuring", 00:17:12.558 "raid_level": "raid5f", 00:17:12.558 "superblock": false, 00:17:12.558 "num_base_bdevs": 4, 00:17:12.558 "num_base_bdevs_discovered": 3, 00:17:12.558 "num_base_bdevs_operational": 4, 00:17:12.558 "base_bdevs_list": [ 00:17:12.558 { 00:17:12.558 "name": "BaseBdev1", 00:17:12.558 "uuid": 
"620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:12.558 "is_configured": true, 00:17:12.558 "data_offset": 0, 00:17:12.558 "data_size": 65536 00:17:12.558 }, 00:17:12.558 { 00:17:12.558 "name": null, 00:17:12.558 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:12.558 "is_configured": false, 00:17:12.558 "data_offset": 0, 00:17:12.558 "data_size": 65536 00:17:12.558 }, 00:17:12.558 { 00:17:12.558 "name": "BaseBdev3", 00:17:12.558 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:12.558 "is_configured": true, 00:17:12.558 "data_offset": 0, 00:17:12.558 "data_size": 65536 00:17:12.558 }, 00:17:12.558 { 00:17:12.558 "name": "BaseBdev4", 00:17:12.558 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:12.558 "is_configured": true, 00:17:12.558 "data_offset": 0, 00:17:12.558 "data_size": 65536 00:17:12.558 } 00:17:12.558 ] 00:17:12.558 }' 00:17:12.558 10:11:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.558 10:11:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.126 [2024-11-19 10:11:27.168665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.126 10:11:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.126 "name": "Existed_Raid", 00:17:13.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.126 "strip_size_kb": 64, 00:17:13.126 "state": "configuring", 00:17:13.126 "raid_level": "raid5f", 00:17:13.126 "superblock": false, 00:17:13.126 "num_base_bdevs": 4, 00:17:13.126 "num_base_bdevs_discovered": 2, 00:17:13.126 "num_base_bdevs_operational": 4, 00:17:13.126 "base_bdevs_list": [ 00:17:13.126 { 00:17:13.126 "name": null, 00:17:13.126 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:13.126 "is_configured": false, 00:17:13.126 "data_offset": 0, 00:17:13.126 "data_size": 65536 00:17:13.126 }, 00:17:13.126 { 00:17:13.126 "name": null, 00:17:13.126 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:13.126 "is_configured": false, 00:17:13.126 "data_offset": 0, 00:17:13.126 "data_size": 65536 00:17:13.126 }, 00:17:13.126 { 00:17:13.126 "name": "BaseBdev3", 00:17:13.126 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:13.126 "is_configured": true, 00:17:13.126 "data_offset": 0, 00:17:13.126 "data_size": 65536 00:17:13.126 }, 00:17:13.126 { 00:17:13.126 "name": "BaseBdev4", 00:17:13.126 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:13.126 "is_configured": true, 00:17:13.126 "data_offset": 0, 00:17:13.126 "data_size": 65536 00:17:13.126 } 00:17:13.126 ] 00:17:13.126 }' 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.126 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.694 10:11:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.694 [2024-11-19 10:11:27.844584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.694 10:11:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.694 "name": "Existed_Raid", 00:17:13.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.694 "strip_size_kb": 64, 00:17:13.694 "state": "configuring", 00:17:13.694 "raid_level": "raid5f", 00:17:13.694 "superblock": false, 00:17:13.694 "num_base_bdevs": 4, 00:17:13.694 "num_base_bdevs_discovered": 3, 00:17:13.694 "num_base_bdevs_operational": 4, 00:17:13.694 "base_bdevs_list": [ 00:17:13.694 { 00:17:13.694 "name": null, 00:17:13.694 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:13.694 "is_configured": false, 00:17:13.694 "data_offset": 0, 00:17:13.694 "data_size": 65536 00:17:13.694 }, 00:17:13.694 { 00:17:13.694 "name": "BaseBdev2", 00:17:13.694 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:13.694 "is_configured": true, 00:17:13.694 "data_offset": 0, 00:17:13.694 "data_size": 65536 00:17:13.694 }, 00:17:13.694 { 00:17:13.694 "name": "BaseBdev3", 00:17:13.694 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:13.694 "is_configured": true, 00:17:13.694 "data_offset": 0, 00:17:13.694 "data_size": 65536 00:17:13.694 }, 00:17:13.694 { 00:17:13.694 "name": 
"BaseBdev4", 00:17:13.694 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:13.694 "is_configured": true, 00:17:13.694 "data_offset": 0, 00:17:13.694 "data_size": 65536 00:17:13.694 } 00:17:13.694 ] 00:17:13.694 }' 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.694 10:11:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 620050a6-f3d1-4d8a-8817-1645fac1575a 00:17:14.265 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.265 10:11:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.265 [2024-11-19 10:11:28.494649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:14.265 [2024-11-19 10:11:28.494731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:14.265 [2024-11-19 10:11:28.494744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:14.265 [2024-11-19 10:11:28.495138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:14.524 [2024-11-19 10:11:28.501679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:14.524 [2024-11-19 10:11:28.501711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:14.524 [2024-11-19 10:11:28.502070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.524 NewBaseBdev 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.524 [ 00:17:14.524 { 00:17:14.524 "name": "NewBaseBdev", 00:17:14.524 "aliases": [ 00:17:14.524 "620050a6-f3d1-4d8a-8817-1645fac1575a" 00:17:14.524 ], 00:17:14.524 "product_name": "Malloc disk", 00:17:14.524 "block_size": 512, 00:17:14.524 "num_blocks": 65536, 00:17:14.524 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:14.524 "assigned_rate_limits": { 00:17:14.524 "rw_ios_per_sec": 0, 00:17:14.524 "rw_mbytes_per_sec": 0, 00:17:14.524 "r_mbytes_per_sec": 0, 00:17:14.524 "w_mbytes_per_sec": 0 00:17:14.524 }, 00:17:14.524 "claimed": true, 00:17:14.524 "claim_type": "exclusive_write", 00:17:14.524 "zoned": false, 00:17:14.524 "supported_io_types": { 00:17:14.524 "read": true, 00:17:14.524 "write": true, 00:17:14.524 "unmap": true, 00:17:14.524 "flush": true, 00:17:14.524 "reset": true, 00:17:14.524 "nvme_admin": false, 00:17:14.524 "nvme_io": false, 00:17:14.524 "nvme_io_md": false, 00:17:14.524 "write_zeroes": true, 00:17:14.524 "zcopy": true, 00:17:14.524 "get_zone_info": false, 00:17:14.524 "zone_management": false, 00:17:14.524 "zone_append": false, 00:17:14.524 "compare": false, 00:17:14.524 "compare_and_write": false, 00:17:14.524 "abort": true, 00:17:14.524 "seek_hole": false, 00:17:14.524 "seek_data": false, 00:17:14.524 "copy": true, 00:17:14.524 "nvme_iov_md": false 00:17:14.524 }, 00:17:14.524 "memory_domains": [ 00:17:14.524 { 
00:17:14.524 "dma_device_id": "system", 00:17:14.524 "dma_device_type": 1 00:17:14.524 }, 00:17:14.524 { 00:17:14.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.524 "dma_device_type": 2 00:17:14.524 } 00:17:14.524 ], 00:17:14.524 "driver_specific": {} 00:17:14.524 } 00:17:14.524 ] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.524 "name": "Existed_Raid", 00:17:14.524 "uuid": "d60a961b-19ec-4fb9-ab2e-d292998020f9", 00:17:14.524 "strip_size_kb": 64, 00:17:14.524 "state": "online", 00:17:14.524 "raid_level": "raid5f", 00:17:14.524 "superblock": false, 00:17:14.524 "num_base_bdevs": 4, 00:17:14.524 "num_base_bdevs_discovered": 4, 00:17:14.524 "num_base_bdevs_operational": 4, 00:17:14.524 "base_bdevs_list": [ 00:17:14.524 { 00:17:14.524 "name": "NewBaseBdev", 00:17:14.524 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:14.524 "is_configured": true, 00:17:14.524 "data_offset": 0, 00:17:14.524 "data_size": 65536 00:17:14.524 }, 00:17:14.524 { 00:17:14.524 "name": "BaseBdev2", 00:17:14.524 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:14.524 "is_configured": true, 00:17:14.524 "data_offset": 0, 00:17:14.524 "data_size": 65536 00:17:14.524 }, 00:17:14.524 { 00:17:14.524 "name": "BaseBdev3", 00:17:14.524 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:14.524 "is_configured": true, 00:17:14.524 "data_offset": 0, 00:17:14.524 "data_size": 65536 00:17:14.524 }, 00:17:14.524 { 00:17:14.524 "name": "BaseBdev4", 00:17:14.524 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:14.524 "is_configured": true, 00:17:14.524 "data_offset": 0, 00:17:14.524 "data_size": 65536 00:17:14.524 } 00:17:14.524 ] 00:17:14.524 }' 00:17:14.524 10:11:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.525 10:11:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.101 [2024-11-19 10:11:29.054542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.101 "name": "Existed_Raid", 00:17:15.101 "aliases": [ 00:17:15.101 "d60a961b-19ec-4fb9-ab2e-d292998020f9" 00:17:15.101 ], 00:17:15.101 "product_name": "Raid Volume", 00:17:15.101 "block_size": 512, 00:17:15.101 "num_blocks": 196608, 00:17:15.101 "uuid": "d60a961b-19ec-4fb9-ab2e-d292998020f9", 00:17:15.101 "assigned_rate_limits": { 00:17:15.101 "rw_ios_per_sec": 0, 00:17:15.101 "rw_mbytes_per_sec": 0, 00:17:15.101 "r_mbytes_per_sec": 0, 00:17:15.101 "w_mbytes_per_sec": 0 00:17:15.101 }, 00:17:15.101 "claimed": false, 00:17:15.101 "zoned": false, 00:17:15.101 "supported_io_types": { 00:17:15.101 
"read": true, 00:17:15.101 "write": true, 00:17:15.101 "unmap": false, 00:17:15.101 "flush": false, 00:17:15.101 "reset": true, 00:17:15.101 "nvme_admin": false, 00:17:15.101 "nvme_io": false, 00:17:15.101 "nvme_io_md": false, 00:17:15.101 "write_zeroes": true, 00:17:15.101 "zcopy": false, 00:17:15.101 "get_zone_info": false, 00:17:15.101 "zone_management": false, 00:17:15.101 "zone_append": false, 00:17:15.101 "compare": false, 00:17:15.101 "compare_and_write": false, 00:17:15.101 "abort": false, 00:17:15.101 "seek_hole": false, 00:17:15.101 "seek_data": false, 00:17:15.101 "copy": false, 00:17:15.101 "nvme_iov_md": false 00:17:15.101 }, 00:17:15.101 "driver_specific": { 00:17:15.101 "raid": { 00:17:15.101 "uuid": "d60a961b-19ec-4fb9-ab2e-d292998020f9", 00:17:15.101 "strip_size_kb": 64, 00:17:15.101 "state": "online", 00:17:15.101 "raid_level": "raid5f", 00:17:15.101 "superblock": false, 00:17:15.101 "num_base_bdevs": 4, 00:17:15.101 "num_base_bdevs_discovered": 4, 00:17:15.101 "num_base_bdevs_operational": 4, 00:17:15.101 "base_bdevs_list": [ 00:17:15.101 { 00:17:15.101 "name": "NewBaseBdev", 00:17:15.101 "uuid": "620050a6-f3d1-4d8a-8817-1645fac1575a", 00:17:15.101 "is_configured": true, 00:17:15.101 "data_offset": 0, 00:17:15.101 "data_size": 65536 00:17:15.101 }, 00:17:15.101 { 00:17:15.101 "name": "BaseBdev2", 00:17:15.101 "uuid": "91943832-78be-4cb9-a7cf-db460cdb4a99", 00:17:15.101 "is_configured": true, 00:17:15.101 "data_offset": 0, 00:17:15.101 "data_size": 65536 00:17:15.101 }, 00:17:15.101 { 00:17:15.101 "name": "BaseBdev3", 00:17:15.101 "uuid": "9f8d78af-d9f6-4b9f-8e9a-78408a73d70a", 00:17:15.101 "is_configured": true, 00:17:15.101 "data_offset": 0, 00:17:15.101 "data_size": 65536 00:17:15.101 }, 00:17:15.101 { 00:17:15.101 "name": "BaseBdev4", 00:17:15.101 "uuid": "e5b35c7e-d9fc-4d7a-b7f8-c2a60f344e58", 00:17:15.101 "is_configured": true, 00:17:15.101 "data_offset": 0, 00:17:15.101 "data_size": 65536 00:17:15.101 } 00:17:15.101 ] 00:17:15.101 } 
00:17:15.101 } 00:17:15.101 }' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:15.101 BaseBdev2 00:17:15.101 BaseBdev3 00:17:15.101 BaseBdev4' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.102 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 [2024-11-19 10:11:29.410294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.363 [2024-11-19 10:11:29.410338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.363 [2024-11-19 10:11:29.410462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.363 [2024-11-19 10:11:29.410897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.363 [2024-11-19 10:11:29.410924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83149 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83149 ']' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83149 00:17:15.363 10:11:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83149 00:17:15.363 killing process with pid 83149 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.363 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83149' 00:17:15.364 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83149 00:17:15.364 [2024-11-19 10:11:29.448677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.364 10:11:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83149 00:17:15.622 [2024-11-19 10:11:29.843232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.001 ************************************ 00:17:17.001 END TEST raid5f_state_function_test 00:17:17.001 ************************************ 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.001 00:17:17.001 real 0m13.144s 00:17:17.001 user 0m21.531s 00:17:17.001 sys 0m1.916s 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 10:11:31 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:17.001 10:11:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:17.001 10:11:31 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.001 10:11:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 ************************************ 00:17:17.001 START TEST raid5f_state_function_test_sb 00:17:17.001 ************************************ 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.001 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.002 10:11:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:17.002 Process raid pid: 83834 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83834 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
83834' 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83834 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83834 ']' 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.002 10:11:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.002 [2024-11-19 10:11:31.191891] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:17:17.002 [2024-11-19 10:11:31.192423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.261 [2024-11-19 10:11:31.383994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.519 [2024-11-19 10:11:31.537094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.778 [2024-11-19 10:11:31.775862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.778 [2024-11-19 10:11:31.776245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.344 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.344 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:18.344 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:18.344 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.344 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.344 [2024-11-19 10:11:32.288428] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.344 [2024-11-19 10:11:32.288500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.344 [2024-11-19 10:11:32.288519] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.345 [2024-11-19 10:11:32.288536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.345 [2024-11-19 10:11:32.288561] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:18.345 [2024-11-19 10:11:32.288576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.345 [2024-11-19 10:11:32.288586] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.345 [2024-11-19 10:11:32.288600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.345 "name": "Existed_Raid", 00:17:18.345 "uuid": "c9627cce-32eb-42e3-bb1c-533c97811c2b", 00:17:18.345 "strip_size_kb": 64, 00:17:18.345 "state": "configuring", 00:17:18.345 "raid_level": "raid5f", 00:17:18.345 "superblock": true, 00:17:18.345 "num_base_bdevs": 4, 00:17:18.345 "num_base_bdevs_discovered": 0, 00:17:18.345 "num_base_bdevs_operational": 4, 00:17:18.345 "base_bdevs_list": [ 00:17:18.345 { 00:17:18.345 "name": "BaseBdev1", 00:17:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.345 "is_configured": false, 00:17:18.345 "data_offset": 0, 00:17:18.345 "data_size": 0 00:17:18.345 }, 00:17:18.345 { 00:17:18.345 "name": "BaseBdev2", 00:17:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.345 "is_configured": false, 00:17:18.345 "data_offset": 0, 00:17:18.345 "data_size": 0 00:17:18.345 }, 00:17:18.345 { 00:17:18.345 "name": "BaseBdev3", 00:17:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.345 "is_configured": false, 00:17:18.345 "data_offset": 0, 00:17:18.345 "data_size": 0 00:17:18.345 }, 00:17:18.345 { 00:17:18.345 "name": "BaseBdev4", 00:17:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.345 "is_configured": false, 00:17:18.345 "data_offset": 0, 00:17:18.345 "data_size": 0 00:17:18.345 } 00:17:18.345 ] 00:17:18.345 }' 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.345 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:18.604 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.862 [2024-11-19 10:11:32.840511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.862 [2024-11-19 10:11:32.840575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.862 [2024-11-19 10:11:32.848493] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.862 [2024-11-19 10:11:32.848567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.862 [2024-11-19 10:11:32.848583] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.862 [2024-11-19 10:11:32.848599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.862 [2024-11-19 10:11:32.848608] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.862 [2024-11-19 10:11:32.848622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.862 [2024-11-19 10:11:32.848632] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.862 [2024-11-19 10:11:32.848663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.862 [2024-11-19 10:11:32.899417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.862 BaseBdev1 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.862 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.862 [ 00:17:18.862 { 00:17:18.862 "name": "BaseBdev1", 00:17:18.862 "aliases": [ 00:17:18.862 "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c" 00:17:18.862 ], 00:17:18.862 "product_name": "Malloc disk", 00:17:18.862 "block_size": 512, 00:17:18.862 "num_blocks": 65536, 00:17:18.862 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:18.862 "assigned_rate_limits": { 00:17:18.862 "rw_ios_per_sec": 0, 00:17:18.862 "rw_mbytes_per_sec": 0, 00:17:18.862 "r_mbytes_per_sec": 0, 00:17:18.862 "w_mbytes_per_sec": 0 00:17:18.862 }, 00:17:18.862 "claimed": true, 00:17:18.862 "claim_type": "exclusive_write", 00:17:18.862 "zoned": false, 00:17:18.862 "supported_io_types": { 00:17:18.862 "read": true, 00:17:18.862 "write": true, 00:17:18.862 "unmap": true, 00:17:18.862 "flush": true, 00:17:18.862 "reset": true, 00:17:18.862 "nvme_admin": false, 00:17:18.862 "nvme_io": false, 00:17:18.862 "nvme_io_md": false, 00:17:18.862 "write_zeroes": true, 00:17:18.862 "zcopy": true, 00:17:18.862 "get_zone_info": false, 00:17:18.862 "zone_management": false, 00:17:18.862 "zone_append": false, 00:17:18.862 "compare": false, 00:17:18.862 "compare_and_write": false, 00:17:18.862 "abort": true, 00:17:18.862 "seek_hole": false, 00:17:18.863 "seek_data": false, 00:17:18.863 "copy": true, 00:17:18.863 "nvme_iov_md": false 00:17:18.863 }, 00:17:18.863 "memory_domains": [ 00:17:18.863 { 00:17:18.863 "dma_device_id": "system", 00:17:18.863 "dma_device_type": 1 00:17:18.863 }, 00:17:18.863 { 00:17:18.863 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:18.863 "dma_device_type": 2 00:17:18.863 } 00:17:18.863 ], 00:17:18.863 "driver_specific": {} 00:17:18.863 } 00:17:18.863 ] 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.863 "name": "Existed_Raid", 00:17:18.863 "uuid": "d070df76-423b-467f-b575-fd699838b9d0", 00:17:18.863 "strip_size_kb": 64, 00:17:18.863 "state": "configuring", 00:17:18.863 "raid_level": "raid5f", 00:17:18.863 "superblock": true, 00:17:18.863 "num_base_bdevs": 4, 00:17:18.863 "num_base_bdevs_discovered": 1, 00:17:18.863 "num_base_bdevs_operational": 4, 00:17:18.863 "base_bdevs_list": [ 00:17:18.863 { 00:17:18.863 "name": "BaseBdev1", 00:17:18.863 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:18.863 "is_configured": true, 00:17:18.863 "data_offset": 2048, 00:17:18.863 "data_size": 63488 00:17:18.863 }, 00:17:18.863 { 00:17:18.863 "name": "BaseBdev2", 00:17:18.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.863 "is_configured": false, 00:17:18.863 "data_offset": 0, 00:17:18.863 "data_size": 0 00:17:18.863 }, 00:17:18.863 { 00:17:18.863 "name": "BaseBdev3", 00:17:18.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.863 "is_configured": false, 00:17:18.863 "data_offset": 0, 00:17:18.863 "data_size": 0 00:17:18.863 }, 00:17:18.863 { 00:17:18.863 "name": "BaseBdev4", 00:17:18.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.863 "is_configured": false, 00:17:18.863 "data_offset": 0, 00:17:18.863 "data_size": 0 00:17:18.863 } 00:17:18.863 ] 00:17:18.863 }' 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.863 10:11:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.429 10:11:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 [2024-11-19 10:11:33.491694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.429 [2024-11-19 10:11:33.491773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 [2024-11-19 10:11:33.503844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.429 [2024-11-19 10:11:33.506592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.429 [2024-11-19 10:11:33.506836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.429 [2024-11-19 10:11:33.506867] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.429 [2024-11-19 10:11:33.506888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.429 [2024-11-19 10:11:33.506899] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.429 [2024-11-19 10:11:33.506914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.429 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.430 10:11:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.430 "name": "Existed_Raid", 00:17:19.430 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:19.430 "strip_size_kb": 64, 00:17:19.430 "state": "configuring", 00:17:19.430 "raid_level": "raid5f", 00:17:19.430 "superblock": true, 00:17:19.430 "num_base_bdevs": 4, 00:17:19.430 "num_base_bdevs_discovered": 1, 00:17:19.430 "num_base_bdevs_operational": 4, 00:17:19.430 "base_bdevs_list": [ 00:17:19.430 { 00:17:19.430 "name": "BaseBdev1", 00:17:19.430 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:19.430 "is_configured": true, 00:17:19.430 "data_offset": 2048, 00:17:19.430 "data_size": 63488 00:17:19.430 }, 00:17:19.430 { 00:17:19.430 "name": "BaseBdev2", 00:17:19.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.430 "is_configured": false, 00:17:19.430 "data_offset": 0, 00:17:19.430 "data_size": 0 00:17:19.430 }, 00:17:19.430 { 00:17:19.430 "name": "BaseBdev3", 00:17:19.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.430 "is_configured": false, 00:17:19.430 "data_offset": 0, 00:17:19.430 "data_size": 0 00:17:19.430 }, 00:17:19.430 { 00:17:19.430 "name": "BaseBdev4", 00:17:19.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.430 "is_configured": false, 00:17:19.430 "data_offset": 0, 00:17:19.430 "data_size": 0 00:17:19.430 } 00:17:19.430 ] 00:17:19.430 }' 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.430 10:11:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 [2024-11-19 10:11:34.102179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.040 BaseBdev2 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 [ 00:17:20.040 { 00:17:20.040 "name": "BaseBdev2", 00:17:20.040 "aliases": [ 00:17:20.040 
"3403a7db-f535-4ea6-89b0-c4767fc56d89" 00:17:20.040 ], 00:17:20.040 "product_name": "Malloc disk", 00:17:20.040 "block_size": 512, 00:17:20.040 "num_blocks": 65536, 00:17:20.040 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:20.040 "assigned_rate_limits": { 00:17:20.040 "rw_ios_per_sec": 0, 00:17:20.040 "rw_mbytes_per_sec": 0, 00:17:20.040 "r_mbytes_per_sec": 0, 00:17:20.040 "w_mbytes_per_sec": 0 00:17:20.040 }, 00:17:20.040 "claimed": true, 00:17:20.040 "claim_type": "exclusive_write", 00:17:20.040 "zoned": false, 00:17:20.040 "supported_io_types": { 00:17:20.040 "read": true, 00:17:20.040 "write": true, 00:17:20.040 "unmap": true, 00:17:20.040 "flush": true, 00:17:20.040 "reset": true, 00:17:20.040 "nvme_admin": false, 00:17:20.040 "nvme_io": false, 00:17:20.040 "nvme_io_md": false, 00:17:20.040 "write_zeroes": true, 00:17:20.040 "zcopy": true, 00:17:20.040 "get_zone_info": false, 00:17:20.040 "zone_management": false, 00:17:20.040 "zone_append": false, 00:17:20.040 "compare": false, 00:17:20.040 "compare_and_write": false, 00:17:20.040 "abort": true, 00:17:20.040 "seek_hole": false, 00:17:20.040 "seek_data": false, 00:17:20.040 "copy": true, 00:17:20.040 "nvme_iov_md": false 00:17:20.040 }, 00:17:20.040 "memory_domains": [ 00:17:20.040 { 00:17:20.040 "dma_device_id": "system", 00:17:20.040 "dma_device_type": 1 00:17:20.040 }, 00:17:20.040 { 00:17:20.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.040 "dma_device_type": 2 00:17:20.040 } 00:17:20.040 ], 00:17:20.040 "driver_specific": {} 00:17:20.040 } 00:17:20.040 ] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.040 "name": "Existed_Raid", 00:17:20.040 "uuid": 
"875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:20.040 "strip_size_kb": 64, 00:17:20.040 "state": "configuring", 00:17:20.040 "raid_level": "raid5f", 00:17:20.040 "superblock": true, 00:17:20.040 "num_base_bdevs": 4, 00:17:20.040 "num_base_bdevs_discovered": 2, 00:17:20.040 "num_base_bdevs_operational": 4, 00:17:20.040 "base_bdevs_list": [ 00:17:20.040 { 00:17:20.040 "name": "BaseBdev1", 00:17:20.040 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:20.040 "is_configured": true, 00:17:20.040 "data_offset": 2048, 00:17:20.040 "data_size": 63488 00:17:20.040 }, 00:17:20.040 { 00:17:20.040 "name": "BaseBdev2", 00:17:20.040 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:20.040 "is_configured": true, 00:17:20.040 "data_offset": 2048, 00:17:20.040 "data_size": 63488 00:17:20.040 }, 00:17:20.040 { 00:17:20.040 "name": "BaseBdev3", 00:17:20.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.040 "is_configured": false, 00:17:20.040 "data_offset": 0, 00:17:20.040 "data_size": 0 00:17:20.040 }, 00:17:20.040 { 00:17:20.040 "name": "BaseBdev4", 00:17:20.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.040 "is_configured": false, 00:17:20.040 "data_offset": 0, 00:17:20.040 "data_size": 0 00:17:20.040 } 00:17:20.040 ] 00:17:20.040 }' 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.040 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.608 [2024-11-19 10:11:34.757392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.608 BaseBdev3 
00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.608 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.608 [ 00:17:20.608 { 00:17:20.608 "name": "BaseBdev3", 00:17:20.608 "aliases": [ 00:17:20.608 "c735b5c1-599b-4bdb-8dc9-15acb03d7077" 00:17:20.608 ], 00:17:20.608 "product_name": "Malloc disk", 00:17:20.608 "block_size": 512, 00:17:20.608 "num_blocks": 65536, 00:17:20.608 "uuid": "c735b5c1-599b-4bdb-8dc9-15acb03d7077", 00:17:20.608 
"assigned_rate_limits": { 00:17:20.608 "rw_ios_per_sec": 0, 00:17:20.608 "rw_mbytes_per_sec": 0, 00:17:20.608 "r_mbytes_per_sec": 0, 00:17:20.608 "w_mbytes_per_sec": 0 00:17:20.608 }, 00:17:20.608 "claimed": true, 00:17:20.608 "claim_type": "exclusive_write", 00:17:20.608 "zoned": false, 00:17:20.608 "supported_io_types": { 00:17:20.608 "read": true, 00:17:20.608 "write": true, 00:17:20.608 "unmap": true, 00:17:20.608 "flush": true, 00:17:20.608 "reset": true, 00:17:20.608 "nvme_admin": false, 00:17:20.608 "nvme_io": false, 00:17:20.608 "nvme_io_md": false, 00:17:20.608 "write_zeroes": true, 00:17:20.608 "zcopy": true, 00:17:20.608 "get_zone_info": false, 00:17:20.608 "zone_management": false, 00:17:20.608 "zone_append": false, 00:17:20.609 "compare": false, 00:17:20.609 "compare_and_write": false, 00:17:20.609 "abort": true, 00:17:20.609 "seek_hole": false, 00:17:20.609 "seek_data": false, 00:17:20.609 "copy": true, 00:17:20.609 "nvme_iov_md": false 00:17:20.609 }, 00:17:20.609 "memory_domains": [ 00:17:20.609 { 00:17:20.609 "dma_device_id": "system", 00:17:20.609 "dma_device_type": 1 00:17:20.609 }, 00:17:20.609 { 00:17:20.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.609 "dma_device_type": 2 00:17:20.609 } 00:17:20.609 ], 00:17:20.609 "driver_specific": {} 00:17:20.609 } 00:17:20.609 ] 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.609 "name": "Existed_Raid", 00:17:20.609 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:20.609 "strip_size_kb": 64, 00:17:20.609 "state": "configuring", 00:17:20.609 "raid_level": "raid5f", 00:17:20.609 "superblock": true, 00:17:20.609 "num_base_bdevs": 4, 00:17:20.609 "num_base_bdevs_discovered": 3, 
00:17:20.609 "num_base_bdevs_operational": 4, 00:17:20.609 "base_bdevs_list": [ 00:17:20.609 { 00:17:20.609 "name": "BaseBdev1", 00:17:20.609 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:20.609 "is_configured": true, 00:17:20.609 "data_offset": 2048, 00:17:20.609 "data_size": 63488 00:17:20.609 }, 00:17:20.609 { 00:17:20.609 "name": "BaseBdev2", 00:17:20.609 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:20.609 "is_configured": true, 00:17:20.609 "data_offset": 2048, 00:17:20.609 "data_size": 63488 00:17:20.609 }, 00:17:20.609 { 00:17:20.609 "name": "BaseBdev3", 00:17:20.609 "uuid": "c735b5c1-599b-4bdb-8dc9-15acb03d7077", 00:17:20.609 "is_configured": true, 00:17:20.609 "data_offset": 2048, 00:17:20.609 "data_size": 63488 00:17:20.609 }, 00:17:20.609 { 00:17:20.609 "name": "BaseBdev4", 00:17:20.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.609 "is_configured": false, 00:17:20.609 "data_offset": 0, 00:17:20.609 "data_size": 0 00:17:20.609 } 00:17:20.609 ] 00:17:20.609 }' 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.609 10:11:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:21.177 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.177 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.178 [2024-11-19 10:11:35.352563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:21.178 [2024-11-19 10:11:35.353295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:21.178 [2024-11-19 10:11:35.353322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:21.178 BaseBdev4 
00:17:21.178 [2024-11-19 10:11:35.353677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.178 [2024-11-19 10:11:35.360745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.178 [2024-11-19 10:11:35.360964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:21.178 [2024-11-19 10:11:35.361390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:21.178 10:11:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.178 [ 00:17:21.178 { 00:17:21.178 "name": "BaseBdev4", 00:17:21.178 "aliases": [ 00:17:21.178 "be0ffb60-e77a-468b-b47d-ef4ea77bc41e" 00:17:21.178 ], 00:17:21.178 "product_name": "Malloc disk", 00:17:21.178 "block_size": 512, 00:17:21.178 "num_blocks": 65536, 00:17:21.178 "uuid": "be0ffb60-e77a-468b-b47d-ef4ea77bc41e", 00:17:21.178 "assigned_rate_limits": { 00:17:21.178 "rw_ios_per_sec": 0, 00:17:21.178 "rw_mbytes_per_sec": 0, 00:17:21.178 "r_mbytes_per_sec": 0, 00:17:21.178 "w_mbytes_per_sec": 0 00:17:21.178 }, 00:17:21.178 "claimed": true, 00:17:21.178 "claim_type": "exclusive_write", 00:17:21.178 "zoned": false, 00:17:21.178 "supported_io_types": { 00:17:21.178 "read": true, 00:17:21.178 "write": true, 00:17:21.178 "unmap": true, 00:17:21.178 "flush": true, 00:17:21.178 "reset": true, 00:17:21.178 "nvme_admin": false, 00:17:21.178 "nvme_io": false, 00:17:21.178 "nvme_io_md": false, 00:17:21.178 "write_zeroes": true, 00:17:21.178 "zcopy": true, 00:17:21.178 "get_zone_info": false, 00:17:21.178 "zone_management": false, 00:17:21.178 "zone_append": false, 00:17:21.178 "compare": false, 00:17:21.178 "compare_and_write": false, 00:17:21.178 "abort": true, 00:17:21.178 "seek_hole": false, 00:17:21.178 "seek_data": false, 00:17:21.178 "copy": true, 00:17:21.178 "nvme_iov_md": false 00:17:21.178 }, 00:17:21.178 "memory_domains": [ 00:17:21.178 { 00:17:21.178 "dma_device_id": "system", 00:17:21.178 "dma_device_type": 1 00:17:21.178 }, 00:17:21.178 { 00:17:21.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.178 "dma_device_type": 2 00:17:21.178 } 00:17:21.178 ], 00:17:21.178 "driver_specific": {} 00:17:21.178 } 00:17:21.178 ] 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.178 10:11:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.178 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.437 "name": "Existed_Raid", 00:17:21.437 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:21.437 "strip_size_kb": 64, 00:17:21.437 "state": "online", 00:17:21.437 "raid_level": "raid5f", 00:17:21.437 "superblock": true, 00:17:21.437 "num_base_bdevs": 4, 00:17:21.437 "num_base_bdevs_discovered": 4, 00:17:21.437 "num_base_bdevs_operational": 4, 00:17:21.437 "base_bdevs_list": [ 00:17:21.437 { 00:17:21.437 "name": "BaseBdev1", 00:17:21.437 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:21.437 "is_configured": true, 00:17:21.437 "data_offset": 2048, 00:17:21.437 "data_size": 63488 00:17:21.437 }, 00:17:21.437 { 00:17:21.437 "name": "BaseBdev2", 00:17:21.437 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:21.437 "is_configured": true, 00:17:21.437 "data_offset": 2048, 00:17:21.437 "data_size": 63488 00:17:21.437 }, 00:17:21.437 { 00:17:21.437 "name": "BaseBdev3", 00:17:21.437 "uuid": "c735b5c1-599b-4bdb-8dc9-15acb03d7077", 00:17:21.437 "is_configured": true, 00:17:21.437 "data_offset": 2048, 00:17:21.437 "data_size": 63488 00:17:21.437 }, 00:17:21.437 { 00:17:21.437 "name": "BaseBdev4", 00:17:21.437 "uuid": "be0ffb60-e77a-468b-b47d-ef4ea77bc41e", 00:17:21.437 "is_configured": true, 00:17:21.437 "data_offset": 2048, 00:17:21.437 "data_size": 63488 00:17:21.437 } 00:17:21.437 ] 00:17:21.437 }' 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.437 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.006 [2024-11-19 10:11:35.941940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.006 "name": "Existed_Raid", 00:17:22.006 "aliases": [ 00:17:22.006 "875da5bf-8eee-4b27-9624-8e00f0a02d13" 00:17:22.006 ], 00:17:22.006 "product_name": "Raid Volume", 00:17:22.006 "block_size": 512, 00:17:22.006 "num_blocks": 190464, 00:17:22.006 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:22.006 "assigned_rate_limits": { 00:17:22.006 "rw_ios_per_sec": 0, 00:17:22.006 "rw_mbytes_per_sec": 0, 00:17:22.006 "r_mbytes_per_sec": 0, 00:17:22.006 "w_mbytes_per_sec": 0 00:17:22.006 }, 00:17:22.006 "claimed": false, 00:17:22.006 "zoned": false, 00:17:22.006 "supported_io_types": { 00:17:22.006 "read": true, 00:17:22.006 "write": true, 00:17:22.006 "unmap": false, 00:17:22.006 "flush": false, 
00:17:22.006 "reset": true, 00:17:22.006 "nvme_admin": false, 00:17:22.006 "nvme_io": false, 00:17:22.006 "nvme_io_md": false, 00:17:22.006 "write_zeroes": true, 00:17:22.006 "zcopy": false, 00:17:22.006 "get_zone_info": false, 00:17:22.006 "zone_management": false, 00:17:22.006 "zone_append": false, 00:17:22.006 "compare": false, 00:17:22.006 "compare_and_write": false, 00:17:22.006 "abort": false, 00:17:22.006 "seek_hole": false, 00:17:22.006 "seek_data": false, 00:17:22.006 "copy": false, 00:17:22.006 "nvme_iov_md": false 00:17:22.006 }, 00:17:22.006 "driver_specific": { 00:17:22.006 "raid": { 00:17:22.006 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:22.006 "strip_size_kb": 64, 00:17:22.006 "state": "online", 00:17:22.006 "raid_level": "raid5f", 00:17:22.006 "superblock": true, 00:17:22.006 "num_base_bdevs": 4, 00:17:22.006 "num_base_bdevs_discovered": 4, 00:17:22.006 "num_base_bdevs_operational": 4, 00:17:22.006 "base_bdevs_list": [ 00:17:22.006 { 00:17:22.006 "name": "BaseBdev1", 00:17:22.006 "uuid": "a2a31e32-8b49-4a0a-b2d1-cfb8cc22f36c", 00:17:22.006 "is_configured": true, 00:17:22.006 "data_offset": 2048, 00:17:22.006 "data_size": 63488 00:17:22.006 }, 00:17:22.006 { 00:17:22.006 "name": "BaseBdev2", 00:17:22.006 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:22.006 "is_configured": true, 00:17:22.006 "data_offset": 2048, 00:17:22.006 "data_size": 63488 00:17:22.006 }, 00:17:22.006 { 00:17:22.006 "name": "BaseBdev3", 00:17:22.006 "uuid": "c735b5c1-599b-4bdb-8dc9-15acb03d7077", 00:17:22.006 "is_configured": true, 00:17:22.006 "data_offset": 2048, 00:17:22.006 "data_size": 63488 00:17:22.006 }, 00:17:22.006 { 00:17:22.006 "name": "BaseBdev4", 00:17:22.006 "uuid": "be0ffb60-e77a-468b-b47d-ef4ea77bc41e", 00:17:22.006 "is_configured": true, 00:17:22.006 "data_offset": 2048, 00:17:22.006 "data_size": 63488 00:17:22.006 } 00:17:22.006 ] 00:17:22.006 } 00:17:22.006 } 00:17:22.006 }' 00:17:22.006 10:11:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:22.006 BaseBdev2 00:17:22.006 BaseBdev3 00:17:22.006 BaseBdev4' 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.006 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.266 [2024-11-19 10:11:36.325876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.266 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.267 "name": "Existed_Raid", 00:17:22.267 "uuid": "875da5bf-8eee-4b27-9624-8e00f0a02d13", 00:17:22.267 "strip_size_kb": 64, 00:17:22.267 "state": "online", 00:17:22.267 "raid_level": "raid5f", 00:17:22.267 "superblock": true, 00:17:22.267 "num_base_bdevs": 4, 00:17:22.267 "num_base_bdevs_discovered": 3, 00:17:22.267 "num_base_bdevs_operational": 3, 00:17:22.267 "base_bdevs_list": [ 00:17:22.267 { 00:17:22.267 "name": null, 00:17:22.267 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:22.267 "is_configured": false, 00:17:22.267 "data_offset": 0, 00:17:22.267 "data_size": 63488 00:17:22.267 }, 00:17:22.267 { 00:17:22.267 "name": "BaseBdev2", 00:17:22.267 "uuid": "3403a7db-f535-4ea6-89b0-c4767fc56d89", 00:17:22.267 "is_configured": true, 00:17:22.267 "data_offset": 2048, 00:17:22.267 "data_size": 63488 00:17:22.267 }, 00:17:22.267 { 00:17:22.267 "name": "BaseBdev3", 00:17:22.267 "uuid": "c735b5c1-599b-4bdb-8dc9-15acb03d7077", 00:17:22.267 "is_configured": true, 00:17:22.267 "data_offset": 2048, 00:17:22.267 "data_size": 63488 00:17:22.267 }, 00:17:22.267 { 00:17:22.267 "name": "BaseBdev4", 00:17:22.267 "uuid": "be0ffb60-e77a-468b-b47d-ef4ea77bc41e", 00:17:22.267 "is_configured": true, 00:17:22.267 "data_offset": 2048, 00:17:22.267 "data_size": 63488 00:17:22.267 } 00:17:22.267 ] 00:17:22.267 }' 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.267 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.839 10:11:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.839 [2024-11-19 10:11:36.992037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:22.839 [2024-11-19 10:11:36.992304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.098 [2024-11-19 10:11:37.087428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.098 
10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.098 [2024-11-19 10:11:37.147491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.098 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.098 [2024-11-19 10:11:37.313837] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:23.098 [2024-11-19 10:11:37.314163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.357 BaseBdev2 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.357 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.357 [ 00:17:23.357 { 00:17:23.357 "name": "BaseBdev2", 00:17:23.357 "aliases": [ 00:17:23.357 "f5b06423-5889-4396-937f-a849c42dc33e" 00:17:23.357 ], 00:17:23.357 "product_name": "Malloc disk", 00:17:23.357 "block_size": 512, 00:17:23.357 "num_blocks": 65536, 00:17:23.357 "uuid": 
"f5b06423-5889-4396-937f-a849c42dc33e", 00:17:23.357 "assigned_rate_limits": { 00:17:23.357 "rw_ios_per_sec": 0, 00:17:23.357 "rw_mbytes_per_sec": 0, 00:17:23.357 "r_mbytes_per_sec": 0, 00:17:23.357 "w_mbytes_per_sec": 0 00:17:23.357 }, 00:17:23.357 "claimed": false, 00:17:23.357 "zoned": false, 00:17:23.357 "supported_io_types": { 00:17:23.357 "read": true, 00:17:23.357 "write": true, 00:17:23.357 "unmap": true, 00:17:23.357 "flush": true, 00:17:23.357 "reset": true, 00:17:23.357 "nvme_admin": false, 00:17:23.358 "nvme_io": false, 00:17:23.358 "nvme_io_md": false, 00:17:23.358 "write_zeroes": true, 00:17:23.358 "zcopy": true, 00:17:23.358 "get_zone_info": false, 00:17:23.358 "zone_management": false, 00:17:23.358 "zone_append": false, 00:17:23.358 "compare": false, 00:17:23.358 "compare_and_write": false, 00:17:23.358 "abort": true, 00:17:23.358 "seek_hole": false, 00:17:23.358 "seek_data": false, 00:17:23.358 "copy": true, 00:17:23.358 "nvme_iov_md": false 00:17:23.358 }, 00:17:23.358 "memory_domains": [ 00:17:23.358 { 00:17:23.358 "dma_device_id": "system", 00:17:23.358 "dma_device_type": 1 00:17:23.358 }, 00:17:23.358 { 00:17:23.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.358 "dma_device_type": 2 00:17:23.358 } 00:17:23.358 ], 00:17:23.358 "driver_specific": {} 00:17:23.358 } 00:17:23.358 ] 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.358 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 BaseBdev3 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 [ 00:17:23.618 { 00:17:23.618 "name": "BaseBdev3", 00:17:23.618 "aliases": [ 00:17:23.618 "7a248b6c-853c-438c-a0f5-9ec889cb3933" 00:17:23.618 ], 00:17:23.618 
"product_name": "Malloc disk", 00:17:23.618 "block_size": 512, 00:17:23.618 "num_blocks": 65536, 00:17:23.618 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:23.618 "assigned_rate_limits": { 00:17:23.618 "rw_ios_per_sec": 0, 00:17:23.618 "rw_mbytes_per_sec": 0, 00:17:23.618 "r_mbytes_per_sec": 0, 00:17:23.618 "w_mbytes_per_sec": 0 00:17:23.618 }, 00:17:23.618 "claimed": false, 00:17:23.618 "zoned": false, 00:17:23.618 "supported_io_types": { 00:17:23.618 "read": true, 00:17:23.618 "write": true, 00:17:23.618 "unmap": true, 00:17:23.618 "flush": true, 00:17:23.618 "reset": true, 00:17:23.618 "nvme_admin": false, 00:17:23.618 "nvme_io": false, 00:17:23.618 "nvme_io_md": false, 00:17:23.618 "write_zeroes": true, 00:17:23.618 "zcopy": true, 00:17:23.618 "get_zone_info": false, 00:17:23.618 "zone_management": false, 00:17:23.618 "zone_append": false, 00:17:23.618 "compare": false, 00:17:23.618 "compare_and_write": false, 00:17:23.618 "abort": true, 00:17:23.618 "seek_hole": false, 00:17:23.618 "seek_data": false, 00:17:23.618 "copy": true, 00:17:23.618 "nvme_iov_md": false 00:17:23.618 }, 00:17:23.618 "memory_domains": [ 00:17:23.618 { 00:17:23.618 "dma_device_id": "system", 00:17:23.618 "dma_device_type": 1 00:17:23.618 }, 00:17:23.618 { 00:17:23.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.618 "dma_device_type": 2 00:17:23.618 } 00:17:23.618 ], 00:17:23.618 "driver_specific": {} 00:17:23.618 } 00:17:23.618 ] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 BaseBdev4 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.618 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.618 [ 00:17:23.618 { 00:17:23.618 "name": "BaseBdev4", 00:17:23.618 
"aliases": [ 00:17:23.619 "8e4768a2-bf37-47cf-846f-8ed69c5771f6" 00:17:23.619 ], 00:17:23.619 "product_name": "Malloc disk", 00:17:23.619 "block_size": 512, 00:17:23.619 "num_blocks": 65536, 00:17:23.619 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:23.619 "assigned_rate_limits": { 00:17:23.619 "rw_ios_per_sec": 0, 00:17:23.619 "rw_mbytes_per_sec": 0, 00:17:23.619 "r_mbytes_per_sec": 0, 00:17:23.619 "w_mbytes_per_sec": 0 00:17:23.619 }, 00:17:23.619 "claimed": false, 00:17:23.619 "zoned": false, 00:17:23.619 "supported_io_types": { 00:17:23.619 "read": true, 00:17:23.619 "write": true, 00:17:23.619 "unmap": true, 00:17:23.619 "flush": true, 00:17:23.619 "reset": true, 00:17:23.619 "nvme_admin": false, 00:17:23.619 "nvme_io": false, 00:17:23.619 "nvme_io_md": false, 00:17:23.619 "write_zeroes": true, 00:17:23.619 "zcopy": true, 00:17:23.619 "get_zone_info": false, 00:17:23.619 "zone_management": false, 00:17:23.619 "zone_append": false, 00:17:23.619 "compare": false, 00:17:23.619 "compare_and_write": false, 00:17:23.619 "abort": true, 00:17:23.619 "seek_hole": false, 00:17:23.619 "seek_data": false, 00:17:23.619 "copy": true, 00:17:23.619 "nvme_iov_md": false 00:17:23.619 }, 00:17:23.619 "memory_domains": [ 00:17:23.619 { 00:17:23.619 "dma_device_id": "system", 00:17:23.619 "dma_device_type": 1 00:17:23.619 }, 00:17:23.619 { 00:17:23.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.619 "dma_device_type": 2 00:17:23.619 } 00:17:23.619 ], 00:17:23.619 "driver_specific": {} 00:17:23.619 } 00:17:23.619 ] 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.619 
10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.619 [2024-11-19 10:11:37.710751] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.619 [2024-11-19 10:11:37.710837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.619 [2024-11-19 10:11:37.710884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.619 [2024-11-19 10:11:37.713628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.619 [2024-11-19 10:11:37.713873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.619 "name": "Existed_Raid", 00:17:23.619 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:23.619 "strip_size_kb": 64, 00:17:23.619 "state": "configuring", 00:17:23.619 "raid_level": "raid5f", 00:17:23.619 "superblock": true, 00:17:23.619 "num_base_bdevs": 4, 00:17:23.619 "num_base_bdevs_discovered": 3, 00:17:23.619 "num_base_bdevs_operational": 4, 00:17:23.619 "base_bdevs_list": [ 00:17:23.619 { 00:17:23.619 "name": "BaseBdev1", 00:17:23.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.619 "is_configured": false, 00:17:23.619 "data_offset": 0, 00:17:23.619 "data_size": 0 00:17:23.619 }, 00:17:23.619 { 00:17:23.619 "name": "BaseBdev2", 00:17:23.619 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:23.619 "is_configured": true, 00:17:23.619 "data_offset": 2048, 00:17:23.619 "data_size": 63488 00:17:23.619 }, 00:17:23.619 { 00:17:23.619 "name": "BaseBdev3", 
00:17:23.619 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:23.619 "is_configured": true, 00:17:23.619 "data_offset": 2048, 00:17:23.619 "data_size": 63488 00:17:23.619 }, 00:17:23.619 { 00:17:23.619 "name": "BaseBdev4", 00:17:23.619 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:23.619 "is_configured": true, 00:17:23.619 "data_offset": 2048, 00:17:23.619 "data_size": 63488 00:17:23.619 } 00:17:23.619 ] 00:17:23.619 }' 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.619 10:11:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.187 [2024-11-19 10:11:38.270902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.187 
10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.187 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.187 "name": "Existed_Raid", 00:17:24.187 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:24.187 "strip_size_kb": 64, 00:17:24.187 "state": "configuring", 00:17:24.187 "raid_level": "raid5f", 00:17:24.187 "superblock": true, 00:17:24.187 "num_base_bdevs": 4, 00:17:24.187 "num_base_bdevs_discovered": 2, 00:17:24.187 "num_base_bdevs_operational": 4, 00:17:24.187 "base_bdevs_list": [ 00:17:24.187 { 00:17:24.187 "name": "BaseBdev1", 00:17:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.187 "is_configured": false, 00:17:24.187 "data_offset": 0, 00:17:24.187 "data_size": 0 00:17:24.187 }, 00:17:24.187 { 00:17:24.187 "name": null, 00:17:24.187 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:24.187 "is_configured": false, 00:17:24.187 "data_offset": 0, 00:17:24.187 "data_size": 63488 00:17:24.187 }, 00:17:24.188 { 
00:17:24.188 "name": "BaseBdev3", 00:17:24.188 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:24.188 "is_configured": true, 00:17:24.188 "data_offset": 2048, 00:17:24.188 "data_size": 63488 00:17:24.188 }, 00:17:24.188 { 00:17:24.188 "name": "BaseBdev4", 00:17:24.188 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:24.188 "is_configured": true, 00:17:24.188 "data_offset": 2048, 00:17:24.188 "data_size": 63488 00:17:24.188 } 00:17:24.188 ] 00:17:24.188 }' 00:17:24.188 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.188 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 [2024-11-19 10:11:38.901208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.755 BaseBdev1 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 [ 00:17:24.755 { 00:17:24.755 "name": "BaseBdev1", 00:17:24.755 "aliases": [ 00:17:24.755 "9936b4bd-c7b8-47cd-9bae-776af5305f7b" 00:17:24.755 ], 00:17:24.755 "product_name": "Malloc disk", 00:17:24.755 "block_size": 512, 00:17:24.755 "num_blocks": 65536, 00:17:24.755 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:24.755 "assigned_rate_limits": { 00:17:24.755 "rw_ios_per_sec": 0, 00:17:24.755 "rw_mbytes_per_sec": 0, 00:17:24.755 
"r_mbytes_per_sec": 0, 00:17:24.755 "w_mbytes_per_sec": 0 00:17:24.755 }, 00:17:24.755 "claimed": true, 00:17:24.755 "claim_type": "exclusive_write", 00:17:24.755 "zoned": false, 00:17:24.755 "supported_io_types": { 00:17:24.755 "read": true, 00:17:24.755 "write": true, 00:17:24.755 "unmap": true, 00:17:24.755 "flush": true, 00:17:24.755 "reset": true, 00:17:24.755 "nvme_admin": false, 00:17:24.755 "nvme_io": false, 00:17:24.755 "nvme_io_md": false, 00:17:24.755 "write_zeroes": true, 00:17:24.755 "zcopy": true, 00:17:24.755 "get_zone_info": false, 00:17:24.755 "zone_management": false, 00:17:24.755 "zone_append": false, 00:17:24.755 "compare": false, 00:17:24.755 "compare_and_write": false, 00:17:24.755 "abort": true, 00:17:24.755 "seek_hole": false, 00:17:24.755 "seek_data": false, 00:17:24.755 "copy": true, 00:17:24.755 "nvme_iov_md": false 00:17:24.755 }, 00:17:24.755 "memory_domains": [ 00:17:24.755 { 00:17:24.755 "dma_device_id": "system", 00:17:24.755 "dma_device_type": 1 00:17:24.755 }, 00:17:24.755 { 00:17:24.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.755 "dma_device_type": 2 00:17:24.755 } 00:17:24.755 ], 00:17:24.755 "driver_specific": {} 00:17:24.755 } 00:17:24.755 ] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.755 10:11:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.755 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.015 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.015 "name": "Existed_Raid", 00:17:25.015 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:25.015 "strip_size_kb": 64, 00:17:25.015 "state": "configuring", 00:17:25.015 "raid_level": "raid5f", 00:17:25.015 "superblock": true, 00:17:25.015 "num_base_bdevs": 4, 00:17:25.015 "num_base_bdevs_discovered": 3, 00:17:25.015 "num_base_bdevs_operational": 4, 00:17:25.015 "base_bdevs_list": [ 00:17:25.015 { 00:17:25.015 "name": "BaseBdev1", 00:17:25.015 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:25.015 "is_configured": true, 00:17:25.015 "data_offset": 2048, 00:17:25.015 "data_size": 63488 00:17:25.015 
}, 00:17:25.015 { 00:17:25.015 "name": null, 00:17:25.015 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:25.015 "is_configured": false, 00:17:25.015 "data_offset": 0, 00:17:25.015 "data_size": 63488 00:17:25.015 }, 00:17:25.015 { 00:17:25.015 "name": "BaseBdev3", 00:17:25.015 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:25.015 "is_configured": true, 00:17:25.015 "data_offset": 2048, 00:17:25.015 "data_size": 63488 00:17:25.015 }, 00:17:25.015 { 00:17:25.015 "name": "BaseBdev4", 00:17:25.015 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:25.015 "is_configured": true, 00:17:25.015 "data_offset": 2048, 00:17:25.015 "data_size": 63488 00:17:25.015 } 00:17:25.015 ] 00:17:25.015 }' 00:17:25.015 10:11:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.015 10:11:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.275 
[2024-11-19 10:11:39.485501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.275 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.534 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:25.534 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.534 "name": "Existed_Raid", 00:17:25.534 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:25.534 "strip_size_kb": 64, 00:17:25.534 "state": "configuring", 00:17:25.534 "raid_level": "raid5f", 00:17:25.534 "superblock": true, 00:17:25.534 "num_base_bdevs": 4, 00:17:25.534 "num_base_bdevs_discovered": 2, 00:17:25.534 "num_base_bdevs_operational": 4, 00:17:25.534 "base_bdevs_list": [ 00:17:25.534 { 00:17:25.534 "name": "BaseBdev1", 00:17:25.534 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:25.534 "is_configured": true, 00:17:25.534 "data_offset": 2048, 00:17:25.534 "data_size": 63488 00:17:25.534 }, 00:17:25.534 { 00:17:25.534 "name": null, 00:17:25.534 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:25.534 "is_configured": false, 00:17:25.534 "data_offset": 0, 00:17:25.534 "data_size": 63488 00:17:25.534 }, 00:17:25.534 { 00:17:25.534 "name": null, 00:17:25.534 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:25.534 "is_configured": false, 00:17:25.534 "data_offset": 0, 00:17:25.534 "data_size": 63488 00:17:25.534 }, 00:17:25.534 { 00:17:25.534 "name": "BaseBdev4", 00:17:25.534 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:25.534 "is_configured": true, 00:17:25.534 "data_offset": 2048, 00:17:25.534 "data_size": 63488 00:17:25.534 } 00:17:25.534 ] 00:17:25.534 }' 00:17:25.534 10:11:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.534 10:11:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 [2024-11-19 10:11:40.089653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.103 10:11:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.103 "name": "Existed_Raid", 00:17:26.103 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:26.103 "strip_size_kb": 64, 00:17:26.103 "state": "configuring", 00:17:26.103 "raid_level": "raid5f", 00:17:26.103 "superblock": true, 00:17:26.103 "num_base_bdevs": 4, 00:17:26.103 "num_base_bdevs_discovered": 3, 00:17:26.103 "num_base_bdevs_operational": 4, 00:17:26.103 "base_bdevs_list": [ 00:17:26.103 { 00:17:26.103 "name": "BaseBdev1", 00:17:26.103 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:26.103 "is_configured": true, 00:17:26.103 "data_offset": 2048, 00:17:26.103 "data_size": 63488 00:17:26.103 }, 00:17:26.103 { 00:17:26.103 "name": null, 00:17:26.103 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:26.103 "is_configured": false, 00:17:26.103 "data_offset": 0, 00:17:26.103 "data_size": 63488 00:17:26.103 }, 00:17:26.103 { 00:17:26.103 "name": "BaseBdev3", 00:17:26.103 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:26.103 "is_configured": true, 00:17:26.103 "data_offset": 2048, 00:17:26.103 "data_size": 63488 00:17:26.103 }, 00:17:26.103 { 
00:17:26.103 "name": "BaseBdev4", 00:17:26.103 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:26.103 "is_configured": true, 00:17:26.103 "data_offset": 2048, 00:17:26.103 "data_size": 63488 00:17:26.103 } 00:17:26.103 ] 00:17:26.103 }' 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.103 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.672 [2024-11-19 10:11:40.681873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.672 "name": "Existed_Raid", 00:17:26.672 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:26.672 "strip_size_kb": 64, 00:17:26.672 "state": "configuring", 00:17:26.672 "raid_level": "raid5f", 00:17:26.672 "superblock": true, 00:17:26.672 "num_base_bdevs": 4, 00:17:26.672 "num_base_bdevs_discovered": 2, 00:17:26.672 
"num_base_bdevs_operational": 4, 00:17:26.672 "base_bdevs_list": [ 00:17:26.672 { 00:17:26.672 "name": null, 00:17:26.672 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:26.672 "is_configured": false, 00:17:26.672 "data_offset": 0, 00:17:26.672 "data_size": 63488 00:17:26.672 }, 00:17:26.672 { 00:17:26.672 "name": null, 00:17:26.672 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:26.672 "is_configured": false, 00:17:26.672 "data_offset": 0, 00:17:26.672 "data_size": 63488 00:17:26.672 }, 00:17:26.672 { 00:17:26.672 "name": "BaseBdev3", 00:17:26.672 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:26.672 "is_configured": true, 00:17:26.672 "data_offset": 2048, 00:17:26.672 "data_size": 63488 00:17:26.672 }, 00:17:26.672 { 00:17:26.672 "name": "BaseBdev4", 00:17:26.672 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:26.672 "is_configured": true, 00:17:26.672 "data_offset": 2048, 00:17:26.672 "data_size": 63488 00:17:26.672 } 00:17:26.672 ] 00:17:26.672 }' 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.672 10:11:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.240 [2024-11-19 10:11:41.383361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.240 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.241 "name": "Existed_Raid", 00:17:27.241 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:27.241 "strip_size_kb": 64, 00:17:27.241 "state": "configuring", 00:17:27.241 "raid_level": "raid5f", 00:17:27.241 "superblock": true, 00:17:27.241 "num_base_bdevs": 4, 00:17:27.241 "num_base_bdevs_discovered": 3, 00:17:27.241 "num_base_bdevs_operational": 4, 00:17:27.241 "base_bdevs_list": [ 00:17:27.241 { 00:17:27.241 "name": null, 00:17:27.241 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:27.241 "is_configured": false, 00:17:27.241 "data_offset": 0, 00:17:27.241 "data_size": 63488 00:17:27.241 }, 00:17:27.241 { 00:17:27.241 "name": "BaseBdev2", 00:17:27.241 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 2048, 00:17:27.241 "data_size": 63488 00:17:27.241 }, 00:17:27.241 { 00:17:27.241 "name": "BaseBdev3", 00:17:27.241 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 2048, 00:17:27.241 "data_size": 63488 00:17:27.241 }, 00:17:27.241 { 00:17:27.241 "name": "BaseBdev4", 00:17:27.241 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 2048, 00:17:27.241 "data_size": 63488 00:17:27.241 } 00:17:27.241 ] 00:17:27.241 }' 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.241 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.807 10:11:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:27.807 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.807 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9936b4bd-c7b8-47cd-9bae-776af5305f7b 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.066 [2024-11-19 10:11:42.092958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:28.066 [2024-11-19 10:11:42.093355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:28.066 [2024-11-19 
10:11:42.093375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:28.066 NewBaseBdev 00:17:28.066 [2024-11-19 10:11:42.093719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.066 [2024-11-19 10:11:42.100361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:28.066 [2024-11-19 10:11:42.100406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:28.066 [2024-11-19 10:11:42.100771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.066 [ 00:17:28.066 { 00:17:28.066 "name": "NewBaseBdev", 00:17:28.066 "aliases": [ 00:17:28.066 "9936b4bd-c7b8-47cd-9bae-776af5305f7b" 00:17:28.066 ], 00:17:28.066 "product_name": "Malloc disk", 00:17:28.066 "block_size": 512, 00:17:28.066 "num_blocks": 65536, 00:17:28.066 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:28.066 "assigned_rate_limits": { 00:17:28.066 "rw_ios_per_sec": 0, 00:17:28.066 "rw_mbytes_per_sec": 0, 00:17:28.066 "r_mbytes_per_sec": 0, 00:17:28.066 "w_mbytes_per_sec": 0 00:17:28.066 }, 00:17:28.066 "claimed": true, 00:17:28.066 "claim_type": "exclusive_write", 00:17:28.066 "zoned": false, 00:17:28.066 "supported_io_types": { 00:17:28.066 "read": true, 00:17:28.066 "write": true, 00:17:28.066 "unmap": true, 00:17:28.066 "flush": true, 00:17:28.066 "reset": true, 00:17:28.066 "nvme_admin": false, 00:17:28.066 "nvme_io": false, 00:17:28.066 "nvme_io_md": false, 00:17:28.066 "write_zeroes": true, 00:17:28.066 "zcopy": true, 00:17:28.066 "get_zone_info": false, 00:17:28.066 "zone_management": false, 00:17:28.066 "zone_append": false, 00:17:28.066 "compare": false, 00:17:28.066 "compare_and_write": false, 00:17:28.066 "abort": true, 00:17:28.066 "seek_hole": false, 00:17:28.066 "seek_data": false, 00:17:28.066 "copy": true, 00:17:28.066 "nvme_iov_md": false 00:17:28.066 }, 00:17:28.066 "memory_domains": [ 00:17:28.066 { 00:17:28.066 "dma_device_id": "system", 00:17:28.066 "dma_device_type": 1 00:17:28.066 }, 00:17:28.066 { 00:17:28.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.066 "dma_device_type": 2 00:17:28.066 } 00:17:28.066 ], 00:17:28.066 "driver_specific": {} 00:17:28.066 } 00:17:28.066 ] 00:17:28.066 10:11:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.066 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.067 "name": "Existed_Raid", 00:17:28.067 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:28.067 "strip_size_kb": 64, 00:17:28.067 "state": "online", 00:17:28.067 "raid_level": "raid5f", 00:17:28.067 "superblock": true, 00:17:28.067 "num_base_bdevs": 4, 00:17:28.067 "num_base_bdevs_discovered": 4, 00:17:28.067 "num_base_bdevs_operational": 4, 00:17:28.067 "base_bdevs_list": [ 00:17:28.067 { 00:17:28.067 "name": "NewBaseBdev", 00:17:28.067 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:28.067 "is_configured": true, 00:17:28.067 "data_offset": 2048, 00:17:28.067 "data_size": 63488 00:17:28.067 }, 00:17:28.067 { 00:17:28.067 "name": "BaseBdev2", 00:17:28.067 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:28.067 "is_configured": true, 00:17:28.067 "data_offset": 2048, 00:17:28.067 "data_size": 63488 00:17:28.067 }, 00:17:28.067 { 00:17:28.067 "name": "BaseBdev3", 00:17:28.067 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:28.067 "is_configured": true, 00:17:28.067 "data_offset": 2048, 00:17:28.067 "data_size": 63488 00:17:28.067 }, 00:17:28.067 { 00:17:28.067 "name": "BaseBdev4", 00:17:28.067 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:28.067 "is_configured": true, 00:17:28.067 "data_offset": 2048, 00:17:28.067 "data_size": 63488 00:17:28.067 } 00:17:28.067 ] 00:17:28.067 }' 00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.067 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 [2024-11-19 10:11:42.625485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.634 "name": "Existed_Raid", 00:17:28.634 "aliases": [ 00:17:28.634 "ee55d7f3-9860-48ca-840d-d7af206426e9" 00:17:28.634 ], 00:17:28.634 "product_name": "Raid Volume", 00:17:28.634 "block_size": 512, 00:17:28.634 "num_blocks": 190464, 00:17:28.634 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:28.634 "assigned_rate_limits": { 00:17:28.634 "rw_ios_per_sec": 0, 00:17:28.634 "rw_mbytes_per_sec": 0, 00:17:28.634 "r_mbytes_per_sec": 0, 00:17:28.634 "w_mbytes_per_sec": 0 00:17:28.634 }, 00:17:28.634 "claimed": false, 00:17:28.634 "zoned": false, 00:17:28.634 "supported_io_types": { 00:17:28.634 "read": true, 00:17:28.634 "write": true, 00:17:28.634 "unmap": false, 00:17:28.634 "flush": false, 00:17:28.634 "reset": true, 00:17:28.634 "nvme_admin": false, 00:17:28.634 "nvme_io": false, 
00:17:28.634 "nvme_io_md": false, 00:17:28.634 "write_zeroes": true, 00:17:28.634 "zcopy": false, 00:17:28.634 "get_zone_info": false, 00:17:28.634 "zone_management": false, 00:17:28.634 "zone_append": false, 00:17:28.634 "compare": false, 00:17:28.634 "compare_and_write": false, 00:17:28.634 "abort": false, 00:17:28.634 "seek_hole": false, 00:17:28.634 "seek_data": false, 00:17:28.634 "copy": false, 00:17:28.634 "nvme_iov_md": false 00:17:28.634 }, 00:17:28.634 "driver_specific": { 00:17:28.634 "raid": { 00:17:28.634 "uuid": "ee55d7f3-9860-48ca-840d-d7af206426e9", 00:17:28.634 "strip_size_kb": 64, 00:17:28.634 "state": "online", 00:17:28.634 "raid_level": "raid5f", 00:17:28.634 "superblock": true, 00:17:28.634 "num_base_bdevs": 4, 00:17:28.634 "num_base_bdevs_discovered": 4, 00:17:28.634 "num_base_bdevs_operational": 4, 00:17:28.634 "base_bdevs_list": [ 00:17:28.634 { 00:17:28.634 "name": "NewBaseBdev", 00:17:28.634 "uuid": "9936b4bd-c7b8-47cd-9bae-776af5305f7b", 00:17:28.634 "is_configured": true, 00:17:28.634 "data_offset": 2048, 00:17:28.634 "data_size": 63488 00:17:28.634 }, 00:17:28.634 { 00:17:28.634 "name": "BaseBdev2", 00:17:28.634 "uuid": "f5b06423-5889-4396-937f-a849c42dc33e", 00:17:28.634 "is_configured": true, 00:17:28.634 "data_offset": 2048, 00:17:28.634 "data_size": 63488 00:17:28.634 }, 00:17:28.634 { 00:17:28.634 "name": "BaseBdev3", 00:17:28.634 "uuid": "7a248b6c-853c-438c-a0f5-9ec889cb3933", 00:17:28.634 "is_configured": true, 00:17:28.634 "data_offset": 2048, 00:17:28.634 "data_size": 63488 00:17:28.634 }, 00:17:28.634 { 00:17:28.634 "name": "BaseBdev4", 00:17:28.634 "uuid": "8e4768a2-bf37-47cf-846f-8ed69c5771f6", 00:17:28.634 "is_configured": true, 00:17:28.634 "data_offset": 2048, 00:17:28.634 "data_size": 63488 00:17:28.634 } 00:17:28.634 ] 00:17:28.634 } 00:17:28.634 } 00:17:28.634 }' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:28.634 BaseBdev2 00:17:28.634 BaseBdev3 00:17:28.634 BaseBdev4' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 10:11:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.634 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 10:11:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 [2024-11-19 10:11:42.981254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.894 [2024-11-19 10:11:42.981300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.895 [2024-11-19 10:11:42.981423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.895 [2024-11-19 10:11:42.981892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.895 [2024-11-19 10:11:42.981915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83834 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83834 ']' 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83834 00:17:28.895 10:11:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.895 10:11:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83834 00:17:28.895 killing process with pid 83834 00:17:28.895 10:11:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.895 10:11:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.895 10:11:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83834' 00:17:28.895 10:11:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83834 00:17:28.895 [2024-11-19 10:11:43.024329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.895 10:11:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83834 00:17:29.470 [2024-11-19 10:11:43.414760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.405 10:11:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:30.405 00:17:30.405 real 0m13.489s 00:17:30.405 user 0m22.098s 00:17:30.405 sys 0m2.017s 00:17:30.405 10:11:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.405 10:11:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.405 ************************************ 00:17:30.405 END TEST raid5f_state_function_test_sb 00:17:30.405 ************************************ 00:17:30.405 10:11:44 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:30.405 10:11:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:30.405 
10:11:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.405 10:11:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.405 ************************************ 00:17:30.405 START TEST raid5f_superblock_test 00:17:30.405 ************************************ 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84516 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84516 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84516 ']' 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.405 10:11:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.664 [2024-11-19 10:11:44.711335] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:17:30.664 [2024-11-19 10:11:44.711514] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84516 ] 00:17:30.664 [2024-11-19 10:11:44.890693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.923 [2024-11-19 10:11:45.044005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.181 [2024-11-19 10:11:45.278802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.181 [2024-11-19 10:11:45.278939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.749 malloc1 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.749 [2024-11-19 10:11:45.849924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.749 [2024-11-19 10:11:45.850010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.749 [2024-11-19 10:11:45.850049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.749 [2024-11-19 10:11:45.850066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.749 [2024-11-19 10:11:45.853147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.749 [2024-11-19 10:11:45.853200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.749 pt1 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.749 malloc2 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.749 [2024-11-19 10:11:45.911623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.749 [2024-11-19 10:11:45.911692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.749 [2024-11-19 10:11:45.911724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.749 [2024-11-19 10:11:45.911739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.749 [2024-11-19 10:11:45.915106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.749 [2024-11-19 10:11:45.915152] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.749 pt2 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.749 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 malloc3 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 [2024-11-19 10:11:45.990388] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:32.008 [2024-11-19 10:11:45.990475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.008 [2024-11-19 10:11:45.990511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:32.008 [2024-11-19 10:11:45.990527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.008 [2024-11-19 10:11:45.993664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.008 [2024-11-19 10:11:45.993879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:32.008 pt3 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:32.008 10:11:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.008 10:11:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 malloc4 00:17:32.008 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.008 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:32.008 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.009 [2024-11-19 10:11:46.051934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:32.009 [2024-11-19 10:11:46.052024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.009 [2024-11-19 10:11:46.052057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:32.009 [2024-11-19 10:11:46.052074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.009 [2024-11-19 10:11:46.055226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.009 [2024-11-19 10:11:46.055274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:32.009 pt4 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.009 [2024-11-19 10:11:46.064113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.009 [2024-11-19 10:11:46.066765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.009 [2024-11-19 10:11:46.067052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:32.009 [2024-11-19 10:11:46.067166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:32.009 [2024-11-19 10:11:46.067455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:32.009 [2024-11-19 10:11:46.067480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:32.009 [2024-11-19 10:11:46.067826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.009 [2024-11-19 10:11:46.074834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:32.009 [2024-11-19 10:11:46.074875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:32.009 [2024-11-19 10:11:46.075149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.009 
10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.009 "name": "raid_bdev1", 00:17:32.009 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:32.009 "strip_size_kb": 64, 00:17:32.009 "state": "online", 00:17:32.009 "raid_level": "raid5f", 00:17:32.009 "superblock": true, 00:17:32.009 "num_base_bdevs": 4, 00:17:32.009 "num_base_bdevs_discovered": 4, 00:17:32.009 "num_base_bdevs_operational": 4, 00:17:32.009 "base_bdevs_list": [ 00:17:32.009 { 00:17:32.009 "name": "pt1", 00:17:32.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.009 "is_configured": true, 00:17:32.009 "data_offset": 2048, 00:17:32.009 "data_size": 63488 00:17:32.009 }, 00:17:32.009 { 00:17:32.009 "name": "pt2", 00:17:32.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.009 "is_configured": true, 00:17:32.009 "data_offset": 2048, 00:17:32.009 
"data_size": 63488 00:17:32.009 }, 00:17:32.009 { 00:17:32.009 "name": "pt3", 00:17:32.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.009 "is_configured": true, 00:17:32.009 "data_offset": 2048, 00:17:32.009 "data_size": 63488 00:17:32.009 }, 00:17:32.009 { 00:17:32.009 "name": "pt4", 00:17:32.009 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.009 "is_configured": true, 00:17:32.009 "data_offset": 2048, 00:17:32.009 "data_size": 63488 00:17:32.009 } 00:17:32.009 ] 00:17:32.009 }' 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.009 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.577 [2024-11-19 10:11:46.623734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.577 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.577 "name": "raid_bdev1", 00:17:32.577 "aliases": [ 00:17:32.577 "53ee63f2-da92-4e5e-bff1-c921603cbedc" 00:17:32.577 ], 00:17:32.577 "product_name": "Raid Volume", 00:17:32.577 "block_size": 512, 00:17:32.577 "num_blocks": 190464, 00:17:32.577 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:32.577 "assigned_rate_limits": { 00:17:32.577 "rw_ios_per_sec": 0, 00:17:32.577 "rw_mbytes_per_sec": 0, 00:17:32.577 "r_mbytes_per_sec": 0, 00:17:32.577 "w_mbytes_per_sec": 0 00:17:32.577 }, 00:17:32.577 "claimed": false, 00:17:32.577 "zoned": false, 00:17:32.577 "supported_io_types": { 00:17:32.577 "read": true, 00:17:32.577 "write": true, 00:17:32.577 "unmap": false, 00:17:32.577 "flush": false, 00:17:32.577 "reset": true, 00:17:32.577 "nvme_admin": false, 00:17:32.577 "nvme_io": false, 00:17:32.577 "nvme_io_md": false, 00:17:32.577 "write_zeroes": true, 00:17:32.577 "zcopy": false, 00:17:32.577 "get_zone_info": false, 00:17:32.577 "zone_management": false, 00:17:32.577 "zone_append": false, 00:17:32.577 "compare": false, 00:17:32.577 "compare_and_write": false, 00:17:32.577 "abort": false, 00:17:32.577 "seek_hole": false, 00:17:32.577 "seek_data": false, 00:17:32.577 "copy": false, 00:17:32.577 "nvme_iov_md": false 00:17:32.577 }, 00:17:32.577 "driver_specific": { 00:17:32.577 "raid": { 00:17:32.577 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:32.577 "strip_size_kb": 64, 00:17:32.577 "state": "online", 00:17:32.577 "raid_level": "raid5f", 00:17:32.577 "superblock": true, 00:17:32.577 "num_base_bdevs": 4, 00:17:32.577 "num_base_bdevs_discovered": 4, 00:17:32.577 "num_base_bdevs_operational": 4, 00:17:32.577 "base_bdevs_list": [ 00:17:32.577 { 00:17:32.577 "name": "pt1", 00:17:32.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.577 "is_configured": true, 00:17:32.577 "data_offset": 2048, 
00:17:32.577 "data_size": 63488 00:17:32.577 }, 00:17:32.577 { 00:17:32.577 "name": "pt2", 00:17:32.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.577 "is_configured": true, 00:17:32.577 "data_offset": 2048, 00:17:32.577 "data_size": 63488 00:17:32.577 }, 00:17:32.577 { 00:17:32.577 "name": "pt3", 00:17:32.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.577 "is_configured": true, 00:17:32.577 "data_offset": 2048, 00:17:32.577 "data_size": 63488 00:17:32.577 }, 00:17:32.577 { 00:17:32.577 "name": "pt4", 00:17:32.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.577 "is_configured": true, 00:17:32.577 "data_offset": 2048, 00:17:32.577 "data_size": 63488 00:17:32.578 } 00:17:32.578 ] 00:17:32.578 } 00:17:32.578 } 00:17:32.578 }' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:32.578 pt2 00:17:32.578 pt3 00:17:32.578 pt4' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.578 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.578 10:11:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 10:11:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:32.837 [2024-11-19 10:11:46.999778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=53ee63f2-da92-4e5e-bff1-c921603cbedc 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
53ee63f2-da92-4e5e-bff1-c921603cbedc ']' 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 [2024-11-19 10:11:47.063558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.837 [2024-11-19 10:11:47.063606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.837 [2024-11-19 10:11:47.063731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.837 [2024-11-19 10:11:47.063891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.837 [2024-11-19 10:11:47.063916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.096 
10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.096 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 10:11:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 [2024-11-19 10:11:47.223652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:33.097 [2024-11-19 10:11:47.226526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:33.097 [2024-11-19 10:11:47.226725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:33.097 [2024-11-19 10:11:47.226847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:33.097 [2024-11-19 10:11:47.227059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:33.097 [2024-11-19 10:11:47.227310] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:33.097 [2024-11-19 10:11:47.227508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:33.097 [2024-11-19 10:11:47.227684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:33.097 [2024-11-19 10:11:47.227859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.097 [2024-11-19 10:11:47.227913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:33.097 request: 00:17:33.097 { 00:17:33.097 "name": "raid_bdev1", 00:17:33.097 "raid_level": "raid5f", 00:17:33.097 "base_bdevs": [ 00:17:33.097 "malloc1", 00:17:33.097 "malloc2", 00:17:33.097 "malloc3", 00:17:33.097 "malloc4" 00:17:33.097 ], 00:17:33.097 "strip_size_kb": 64, 00:17:33.097 "superblock": false, 00:17:33.097 "method": "bdev_raid_create", 00:17:33.097 "req_id": 1 00:17:33.097 } 00:17:33.097 Got JSON-RPC error response 
00:17:33.097 response: 00:17:33.097 { 00:17:33.097 "code": -17, 00:17:33.097 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.097 } 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 [2024-11-19 10:11:47.296410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.097 [2024-11-19 10:11:47.296508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:33.097 [2024-11-19 10:11:47.296540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:33.097 [2024-11-19 10:11:47.296560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.097 [2024-11-19 10:11:47.299703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.097 [2024-11-19 10:11:47.299758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.097 [2024-11-19 10:11:47.299904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.097 [2024-11-19 10:11:47.299992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.097 pt1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.356 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.356 "name": "raid_bdev1", 00:17:33.356 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:33.356 "strip_size_kb": 64, 00:17:33.356 "state": "configuring", 00:17:33.356 "raid_level": "raid5f", 00:17:33.356 "superblock": true, 00:17:33.356 "num_base_bdevs": 4, 00:17:33.356 "num_base_bdevs_discovered": 1, 00:17:33.356 "num_base_bdevs_operational": 4, 00:17:33.356 "base_bdevs_list": [ 00:17:33.356 { 00:17:33.356 "name": "pt1", 00:17:33.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.356 "is_configured": true, 00:17:33.356 "data_offset": 2048, 00:17:33.356 "data_size": 63488 00:17:33.356 }, 00:17:33.356 { 00:17:33.356 "name": null, 00:17:33.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.356 "is_configured": false, 00:17:33.356 "data_offset": 2048, 00:17:33.356 "data_size": 63488 00:17:33.356 }, 00:17:33.356 { 00:17:33.356 "name": null, 00:17:33.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.356 "is_configured": false, 00:17:33.356 "data_offset": 2048, 00:17:33.356 "data_size": 63488 00:17:33.356 }, 00:17:33.356 { 00:17:33.356 "name": null, 00:17:33.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.356 "is_configured": false, 00:17:33.356 "data_offset": 2048, 00:17:33.356 "data_size": 63488 00:17:33.356 } 00:17:33.356 ] 00:17:33.356 }' 
00:17:33.356 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.356 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.924 [2024-11-19 10:11:47.852594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.924 [2024-11-19 10:11:47.852704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.924 [2024-11-19 10:11:47.852738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:33.924 [2024-11-19 10:11:47.852758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.924 [2024-11-19 10:11:47.853400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.924 [2024-11-19 10:11:47.853441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.924 [2024-11-19 10:11:47.853560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:33.924 [2024-11-19 10:11:47.853601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.924 pt2 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.924 [2024-11-19 10:11:47.860617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.924 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.925 "name": "raid_bdev1", 00:17:33.925 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:33.925 "strip_size_kb": 64, 00:17:33.925 "state": "configuring", 00:17:33.925 "raid_level": "raid5f", 00:17:33.925 "superblock": true, 00:17:33.925 "num_base_bdevs": 4, 00:17:33.925 "num_base_bdevs_discovered": 1, 00:17:33.925 "num_base_bdevs_operational": 4, 00:17:33.925 "base_bdevs_list": [ 00:17:33.925 { 00:17:33.925 "name": "pt1", 00:17:33.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.925 "is_configured": true, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": null, 00:17:33.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.925 "is_configured": false, 00:17:33.925 "data_offset": 0, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": null, 00:17:33.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.925 "is_configured": false, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": null, 00:17:33.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.925 "is_configured": false, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 } 00:17:33.925 ] 00:17:33.925 }' 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.925 10:11:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.184 [2024-11-19 10:11:48.368743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.184 [2024-11-19 10:11:48.369019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.184 [2024-11-19 10:11:48.369076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:34.184 [2024-11-19 10:11:48.369094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.184 [2024-11-19 10:11:48.369750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.184 [2024-11-19 10:11:48.369777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.184 [2024-11-19 10:11:48.369920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.184 [2024-11-19 10:11:48.369956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.184 pt2 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.184 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.185 [2024-11-19 10:11:48.376668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:34.185 [2024-11-19 10:11:48.376736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.185 [2024-11-19 10:11:48.376764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:34.185 [2024-11-19 10:11:48.376777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.185 [2024-11-19 10:11:48.377275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.185 [2024-11-19 10:11:48.377307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:34.185 [2024-11-19 10:11:48.377391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:34.185 [2024-11-19 10:11:48.377438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:34.185 pt3 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.185 [2024-11-19 10:11:48.384635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:34.185 [2024-11-19 10:11:48.384707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.185 [2024-11-19 10:11:48.384751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:34.185 [2024-11-19 10:11:48.384765] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.185 [2024-11-19 10:11:48.385269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.185 [2024-11-19 10:11:48.385312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:34.185 [2024-11-19 10:11:48.385400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:34.185 [2024-11-19 10:11:48.385429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:34.185 [2024-11-19 10:11:48.385611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:34.185 [2024-11-19 10:11:48.385628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:34.185 [2024-11-19 10:11:48.385987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:34.185 [2024-11-19 10:11:48.392533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:34.185 [2024-11-19 10:11:48.392567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:34.185 [2024-11-19 10:11:48.392830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.185 pt4 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.185 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.444 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.444 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.444 "name": "raid_bdev1", 00:17:34.444 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:34.444 "strip_size_kb": 64, 00:17:34.444 "state": "online", 00:17:34.444 "raid_level": "raid5f", 00:17:34.444 "superblock": true, 00:17:34.444 "num_base_bdevs": 4, 00:17:34.444 "num_base_bdevs_discovered": 4, 00:17:34.444 "num_base_bdevs_operational": 4, 00:17:34.444 "base_bdevs_list": [ 00:17:34.444 { 00:17:34.444 "name": "pt1", 00:17:34.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.444 "is_configured": true, 00:17:34.444 
"data_offset": 2048, 00:17:34.444 "data_size": 63488 00:17:34.444 }, 00:17:34.444 { 00:17:34.444 "name": "pt2", 00:17:34.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.444 "is_configured": true, 00:17:34.444 "data_offset": 2048, 00:17:34.444 "data_size": 63488 00:17:34.444 }, 00:17:34.444 { 00:17:34.444 "name": "pt3", 00:17:34.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.444 "is_configured": true, 00:17:34.444 "data_offset": 2048, 00:17:34.444 "data_size": 63488 00:17:34.444 }, 00:17:34.444 { 00:17:34.444 "name": "pt4", 00:17:34.444 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:34.444 "is_configured": true, 00:17:34.444 "data_offset": 2048, 00:17:34.444 "data_size": 63488 00:17:34.444 } 00:17:34.444 ] 00:17:34.444 }' 00:17:34.444 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.444 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.703 10:11:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 [2024-11-19 10:11:48.917405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.703 10:11:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.962 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.962 "name": "raid_bdev1", 00:17:34.962 "aliases": [ 00:17:34.962 "53ee63f2-da92-4e5e-bff1-c921603cbedc" 00:17:34.962 ], 00:17:34.962 "product_name": "Raid Volume", 00:17:34.962 "block_size": 512, 00:17:34.962 "num_blocks": 190464, 00:17:34.962 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:34.962 "assigned_rate_limits": { 00:17:34.962 "rw_ios_per_sec": 0, 00:17:34.962 "rw_mbytes_per_sec": 0, 00:17:34.962 "r_mbytes_per_sec": 0, 00:17:34.962 "w_mbytes_per_sec": 0 00:17:34.962 }, 00:17:34.962 "claimed": false, 00:17:34.962 "zoned": false, 00:17:34.962 "supported_io_types": { 00:17:34.962 "read": true, 00:17:34.962 "write": true, 00:17:34.962 "unmap": false, 00:17:34.962 "flush": false, 00:17:34.962 "reset": true, 00:17:34.962 "nvme_admin": false, 00:17:34.962 "nvme_io": false, 00:17:34.962 "nvme_io_md": false, 00:17:34.962 "write_zeroes": true, 00:17:34.962 "zcopy": false, 00:17:34.962 "get_zone_info": false, 00:17:34.962 "zone_management": false, 00:17:34.962 "zone_append": false, 00:17:34.962 "compare": false, 00:17:34.962 "compare_and_write": false, 00:17:34.962 "abort": false, 00:17:34.962 "seek_hole": false, 00:17:34.962 "seek_data": false, 00:17:34.962 "copy": false, 00:17:34.962 "nvme_iov_md": false 00:17:34.962 }, 00:17:34.962 "driver_specific": { 00:17:34.962 "raid": { 00:17:34.962 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:34.962 "strip_size_kb": 64, 00:17:34.962 "state": "online", 00:17:34.962 "raid_level": "raid5f", 00:17:34.962 "superblock": true, 00:17:34.962 "num_base_bdevs": 4, 00:17:34.962 "num_base_bdevs_discovered": 4, 
00:17:34.962 "num_base_bdevs_operational": 4, 00:17:34.962 "base_bdevs_list": [ 00:17:34.962 { 00:17:34.962 "name": "pt1", 00:17:34.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.962 "is_configured": true, 00:17:34.962 "data_offset": 2048, 00:17:34.962 "data_size": 63488 00:17:34.962 }, 00:17:34.962 { 00:17:34.962 "name": "pt2", 00:17:34.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.962 "is_configured": true, 00:17:34.962 "data_offset": 2048, 00:17:34.962 "data_size": 63488 00:17:34.962 }, 00:17:34.962 { 00:17:34.962 "name": "pt3", 00:17:34.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.962 "is_configured": true, 00:17:34.962 "data_offset": 2048, 00:17:34.962 "data_size": 63488 00:17:34.962 }, 00:17:34.962 { 00:17:34.962 "name": "pt4", 00:17:34.962 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:34.962 "is_configured": true, 00:17:34.962 "data_offset": 2048, 00:17:34.962 "data_size": 63488 00:17:34.962 } 00:17:34.962 ] 00:17:34.962 } 00:17:34.962 } 00:17:34.962 }' 00:17:34.962 10:11:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.962 pt2 00:17:34.962 pt3 00:17:34.962 pt4' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.962 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.962 10:11:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:35.221 [2024-11-19 10:11:49.289423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.221 
10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 53ee63f2-da92-4e5e-bff1-c921603cbedc '!=' 53ee63f2-da92-4e5e-bff1-c921603cbedc ']' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.221 [2024-11-19 10:11:49.341298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.221 "name": "raid_bdev1", 00:17:35.221 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:35.221 "strip_size_kb": 64, 00:17:35.221 "state": "online", 00:17:35.221 "raid_level": "raid5f", 00:17:35.221 "superblock": true, 00:17:35.221 "num_base_bdevs": 4, 00:17:35.221 "num_base_bdevs_discovered": 3, 00:17:35.221 "num_base_bdevs_operational": 3, 00:17:35.221 "base_bdevs_list": [ 00:17:35.221 { 00:17:35.221 "name": null, 00:17:35.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.221 "is_configured": false, 00:17:35.221 "data_offset": 0, 00:17:35.221 "data_size": 63488 00:17:35.221 }, 00:17:35.221 { 00:17:35.221 "name": "pt2", 00:17:35.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.221 "is_configured": true, 00:17:35.221 "data_offset": 2048, 00:17:35.221 "data_size": 63488 00:17:35.221 }, 00:17:35.221 { 00:17:35.221 "name": "pt3", 00:17:35.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.221 "is_configured": true, 00:17:35.221 "data_offset": 2048, 00:17:35.221 "data_size": 63488 00:17:35.221 }, 00:17:35.221 { 00:17:35.221 "name": "pt4", 00:17:35.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:35.221 "is_configured": true, 00:17:35.221 
"data_offset": 2048, 00:17:35.221 "data_size": 63488 00:17:35.221 } 00:17:35.221 ] 00:17:35.221 }' 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.221 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [2024-11-19 10:11:49.873356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.787 [2024-11-19 10:11:49.873402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.787 [2024-11-19 10:11:49.873525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.787 [2024-11-19 10:11:49.873645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.787 [2024-11-19 10:11:49.873663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [2024-11-19 10:11:49.961347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.787 [2024-11-19 10:11:49.961426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.787 [2024-11-19 10:11:49.961459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:35.787 [2024-11-19 10:11:49.961474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.787 [2024-11-19 10:11:49.964723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.787 [2024-11-19 10:11:49.964927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.787 [2024-11-19 10:11:49.965073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.787 [2024-11-19 10:11:49.965145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.787 pt2 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.787 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.788 10:11:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.045 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.045 "name": "raid_bdev1", 00:17:36.045 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:36.045 "strip_size_kb": 64, 00:17:36.045 "state": "configuring", 00:17:36.045 "raid_level": "raid5f", 00:17:36.045 "superblock": true, 00:17:36.045 
"num_base_bdevs": 4, 00:17:36.045 "num_base_bdevs_discovered": 1, 00:17:36.045 "num_base_bdevs_operational": 3, 00:17:36.045 "base_bdevs_list": [ 00:17:36.045 { 00:17:36.045 "name": null, 00:17:36.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.045 "is_configured": false, 00:17:36.045 "data_offset": 2048, 00:17:36.045 "data_size": 63488 00:17:36.045 }, 00:17:36.045 { 00:17:36.045 "name": "pt2", 00:17:36.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.045 "is_configured": true, 00:17:36.045 "data_offset": 2048, 00:17:36.045 "data_size": 63488 00:17:36.045 }, 00:17:36.045 { 00:17:36.045 "name": null, 00:17:36.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.045 "is_configured": false, 00:17:36.045 "data_offset": 2048, 00:17:36.045 "data_size": 63488 00:17:36.045 }, 00:17:36.045 { 00:17:36.045 "name": null, 00:17:36.045 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:36.045 "is_configured": false, 00:17:36.046 "data_offset": 2048, 00:17:36.046 "data_size": 63488 00:17:36.046 } 00:17:36.046 ] 00:17:36.046 }' 00:17:36.046 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.046 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.303 [2024-11-19 10:11:50.489552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.303 [2024-11-19 
10:11:50.489652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.303 [2024-11-19 10:11:50.489690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:36.303 [2024-11-19 10:11:50.489705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.303 [2024-11-19 10:11:50.490389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.303 [2024-11-19 10:11:50.490601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.303 [2024-11-19 10:11:50.490753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:36.303 [2024-11-19 10:11:50.490821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.303 pt3 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.303 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.561 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.561 "name": "raid_bdev1", 00:17:36.561 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:36.561 "strip_size_kb": 64, 00:17:36.561 "state": "configuring", 00:17:36.561 "raid_level": "raid5f", 00:17:36.561 "superblock": true, 00:17:36.561 "num_base_bdevs": 4, 00:17:36.561 "num_base_bdevs_discovered": 2, 00:17:36.561 "num_base_bdevs_operational": 3, 00:17:36.562 "base_bdevs_list": [ 00:17:36.562 { 00:17:36.562 "name": null, 00:17:36.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.562 "is_configured": false, 00:17:36.562 "data_offset": 2048, 00:17:36.562 "data_size": 63488 00:17:36.562 }, 00:17:36.562 { 00:17:36.562 "name": "pt2", 00:17:36.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.562 "is_configured": true, 00:17:36.562 "data_offset": 2048, 00:17:36.562 "data_size": 63488 00:17:36.562 }, 00:17:36.562 { 00:17:36.562 "name": "pt3", 00:17:36.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.562 "is_configured": true, 00:17:36.562 "data_offset": 2048, 00:17:36.562 "data_size": 63488 00:17:36.562 }, 00:17:36.562 { 00:17:36.562 "name": null, 00:17:36.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:36.562 "is_configured": false, 00:17:36.562 "data_offset": 2048, 
00:17:36.562 "data_size": 63488 00:17:36.562 } 00:17:36.562 ] 00:17:36.562 }' 00:17:36.562 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.562 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.821 10:11:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.821 [2024-11-19 10:11:51.001750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:36.821 [2024-11-19 10:11:51.002004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.821 [2024-11-19 10:11:51.002073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:36.821 [2024-11-19 10:11:51.002091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.821 [2024-11-19 10:11:51.002735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.821 [2024-11-19 10:11:51.002773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:36.821 [2024-11-19 10:11:51.002927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:36.821 [2024-11-19 10:11:51.002964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:36.821 [2024-11-19 10:11:51.003148] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:36.821 [2024-11-19 10:11:51.003166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:36.821 [2024-11-19 10:11:51.003487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:36.821 [2024-11-19 10:11:51.010345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:36.821 [2024-11-19 10:11:51.010396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:36.821 [2024-11-19 10:11:51.010836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.821 pt4 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.821 
10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.821 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.079 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.079 "name": "raid_bdev1", 00:17:37.079 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:37.079 "strip_size_kb": 64, 00:17:37.079 "state": "online", 00:17:37.079 "raid_level": "raid5f", 00:17:37.079 "superblock": true, 00:17:37.079 "num_base_bdevs": 4, 00:17:37.079 "num_base_bdevs_discovered": 3, 00:17:37.079 "num_base_bdevs_operational": 3, 00:17:37.079 "base_bdevs_list": [ 00:17:37.079 { 00:17:37.079 "name": null, 00:17:37.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.079 "is_configured": false, 00:17:37.079 "data_offset": 2048, 00:17:37.079 "data_size": 63488 00:17:37.079 }, 00:17:37.079 { 00:17:37.079 "name": "pt2", 00:17:37.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.079 "is_configured": true, 00:17:37.079 "data_offset": 2048, 00:17:37.079 "data_size": 63488 00:17:37.079 }, 00:17:37.079 { 00:17:37.079 "name": "pt3", 00:17:37.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.079 "is_configured": true, 00:17:37.079 "data_offset": 2048, 00:17:37.079 "data_size": 63488 00:17:37.079 }, 00:17:37.079 { 00:17:37.079 "name": "pt4", 00:17:37.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:37.079 "is_configured": true, 00:17:37.079 "data_offset": 2048, 00:17:37.079 "data_size": 63488 00:17:37.079 } 00:17:37.079 ] 00:17:37.079 }' 00:17:37.079 10:11:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.079 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.338 [2024-11-19 10:11:51.535163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.338 [2024-11-19 10:11:51.535201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.338 [2024-11-19 10:11:51.535318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.338 [2024-11-19 10:11:51.535450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.338 [2024-11-19 10:11:51.535471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:37.338 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 [2024-11-19 10:11:51.599130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.596 [2024-11-19 10:11:51.599362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.596 [2024-11-19 10:11:51.599420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:37.596 [2024-11-19 10:11:51.599441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.596 [2024-11-19 10:11:51.602682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.596 [2024-11-19 10:11:51.602909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.596 [2024-11-19 10:11:51.603037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:37.596 [2024-11-19 10:11:51.603123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.596 
[2024-11-19 10:11:51.603303] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:37.596 [2024-11-19 10:11:51.603327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.596 [2024-11-19 10:11:51.603363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:37.596 [2024-11-19 10:11:51.603476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.596 [2024-11-19 10:11:51.603686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.596 pt1 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.596 "name": "raid_bdev1", 00:17:37.596 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:37.596 "strip_size_kb": 64, 00:17:37.596 "state": "configuring", 00:17:37.596 "raid_level": "raid5f", 00:17:37.596 "superblock": true, 00:17:37.596 "num_base_bdevs": 4, 00:17:37.596 "num_base_bdevs_discovered": 2, 00:17:37.596 "num_base_bdevs_operational": 3, 00:17:37.596 "base_bdevs_list": [ 00:17:37.596 { 00:17:37.596 "name": null, 00:17:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.596 "is_configured": false, 00:17:37.596 "data_offset": 2048, 00:17:37.596 "data_size": 63488 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": "pt2", 00:17:37.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.596 "is_configured": true, 00:17:37.596 "data_offset": 2048, 00:17:37.596 "data_size": 63488 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": "pt3", 00:17:37.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.596 "is_configured": true, 00:17:37.596 "data_offset": 2048, 00:17:37.596 "data_size": 63488 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": null, 00:17:37.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:37.596 "is_configured": false, 00:17:37.596 "data_offset": 2048, 00:17:37.596 "data_size": 63488 00:17:37.596 } 00:17:37.596 ] 
00:17:37.596 }' 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.596 10:11:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.164 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.164 [2024-11-19 10:11:52.167635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:38.164 [2024-11-19 10:11:52.167734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.164 [2024-11-19 10:11:52.167774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:38.164 [2024-11-19 10:11:52.167805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.164 [2024-11-19 10:11:52.168487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.164 [2024-11-19 10:11:52.168520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:38.164 [2024-11-19 10:11:52.168645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:38.164 [2024-11-19 10:11:52.168709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:38.165 [2024-11-19 10:11:52.168952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:38.165 [2024-11-19 10:11:52.168972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:38.165 [2024-11-19 10:11:52.169387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:38.165 [2024-11-19 10:11:52.176651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:38.165 [2024-11-19 10:11:52.176684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:38.165 [2024-11-19 10:11:52.177111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.165 pt4 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.165 10:11:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.165 "name": "raid_bdev1", 00:17:38.165 "uuid": "53ee63f2-da92-4e5e-bff1-c921603cbedc", 00:17:38.165 "strip_size_kb": 64, 00:17:38.165 "state": "online", 00:17:38.165 "raid_level": "raid5f", 00:17:38.165 "superblock": true, 00:17:38.165 "num_base_bdevs": 4, 00:17:38.165 "num_base_bdevs_discovered": 3, 00:17:38.165 "num_base_bdevs_operational": 3, 00:17:38.165 "base_bdevs_list": [ 00:17:38.165 { 00:17:38.165 "name": null, 00:17:38.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.165 "is_configured": false, 00:17:38.165 "data_offset": 2048, 00:17:38.165 "data_size": 63488 00:17:38.165 }, 00:17:38.165 { 00:17:38.165 "name": "pt2", 00:17:38.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.165 "is_configured": true, 00:17:38.165 "data_offset": 2048, 00:17:38.165 "data_size": 63488 00:17:38.165 }, 00:17:38.165 { 00:17:38.165 "name": "pt3", 00:17:38.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.165 "is_configured": true, 00:17:38.165 "data_offset": 2048, 00:17:38.165 "data_size": 63488 
00:17:38.165 }, 00:17:38.165 { 00:17:38.165 "name": "pt4", 00:17:38.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:38.165 "is_configured": true, 00:17:38.165 "data_offset": 2048, 00:17:38.165 "data_size": 63488 00:17:38.165 } 00:17:38.165 ] 00:17:38.165 }' 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.165 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:38.733 [2024-11-19 10:11:52.765725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 53ee63f2-da92-4e5e-bff1-c921603cbedc '!=' 53ee63f2-da92-4e5e-bff1-c921603cbedc ']' 00:17:38.733 10:11:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84516 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84516 ']' 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84516 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84516 00:17:38.733 killing process with pid 84516 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84516' 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84516 00:17:38.733 [2024-11-19 10:11:52.848889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.733 10:11:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84516 00:17:38.733 [2024-11-19 10:11:52.849052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.733 [2024-11-19 10:11:52.849167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.734 [2024-11-19 10:11:52.849190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:39.302 [2024-11-19 10:11:53.252828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.238 10:11:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.238 
00:17:40.238 real 0m9.833s 00:17:40.238 user 0m15.946s 00:17:40.238 sys 0m1.467s 00:17:40.238 10:11:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.238 10:11:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.238 ************************************ 00:17:40.238 END TEST raid5f_superblock_test 00:17:40.238 ************************************ 00:17:40.498 10:11:54 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:40.498 10:11:54 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:40.498 10:11:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:40.498 10:11:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.498 10:11:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 ************************************ 00:17:40.498 START TEST raid5f_rebuild_test 00:17:40.498 ************************************ 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:40.498 10:11:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85007 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85007 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85007 ']' 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.498 10:11:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 [2024-11-19 10:11:54.640969] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:17:40.498 [2024-11-19 10:11:54.641441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:40.498 Zero copy mechanism will not be used. 00:17:40.498 -allocations --file-prefix=spdk_pid85007 ] 00:17:40.757 [2024-11-19 10:11:54.838578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.016 [2024-11-19 10:11:55.022383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.275 [2024-11-19 10:11:55.259596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.275 [2024-11-19 10:11:55.259849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.533 BaseBdev1_malloc 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:17:41.533 [2024-11-19 10:11:55.748442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.533 [2024-11-19 10:11:55.748537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.533 [2024-11-19 10:11:55.748572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.533 [2024-11-19 10:11:55.748592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.533 [2024-11-19 10:11:55.751594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.533 [2024-11-19 10:11:55.751647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.533 BaseBdev1 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.533 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.792 BaseBdev2_malloc 00:17:41.792 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.792 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:41.792 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.792 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.792 [2024-11-19 10:11:55.808871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:41.792 [2024-11-19 10:11:55.808950] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.793 [2024-11-19 10:11:55.808980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.793 [2024-11-19 10:11:55.809001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.793 [2024-11-19 10:11:55.812604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.793 [2024-11-19 10:11:55.812669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:41.793 BaseBdev2 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 BaseBdev3_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 [2024-11-19 10:11:55.883427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:41.793 [2024-11-19 10:11:55.883501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.793 [2024-11-19 10:11:55.883534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:41.793 
[2024-11-19 10:11:55.883553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.793 [2024-11-19 10:11:55.886481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.793 [2024-11-19 10:11:55.886532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:41.793 BaseBdev3 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 BaseBdev4_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 [2024-11-19 10:11:55.940576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:41.793 [2024-11-19 10:11:55.940645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.793 [2024-11-19 10:11:55.940676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:41.793 [2024-11-19 10:11:55.940694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.793 [2024-11-19 10:11:55.943640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:41.793 [2024-11-19 10:11:55.943848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:41.793 BaseBdev4 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 spare_malloc 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 spare_delay 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 [2024-11-19 10:11:56.006920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.793 [2024-11-19 10:11:56.007004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.793 [2024-11-19 10:11:56.007043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:41.793 [2024-11-19 10:11:56.007071] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.793 [2024-11-19 10:11:56.010201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.793 [2024-11-19 10:11:56.010252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.793 spare 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.793 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.793 [2024-11-19 10:11:56.019153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.793 [2024-11-19 10:11:56.021746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.793 [2024-11-19 10:11:56.021978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.793 [2024-11-19 10:11:56.022084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:41.793 [2024-11-19 10:11:56.022212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:41.793 [2024-11-19 10:11:56.022233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:41.793 [2024-11-19 10:11:56.022547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.052 [2024-11-19 10:11:56.029460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.052 [2024-11-19 10:11:56.029485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.052 [2024-11-19 
10:11:56.029737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.052 "name": "raid_bdev1", 00:17:42.052 "uuid": 
"9834a627-ae34-4e18-acf3-e0b377727def", 00:17:42.052 "strip_size_kb": 64, 00:17:42.052 "state": "online", 00:17:42.052 "raid_level": "raid5f", 00:17:42.052 "superblock": false, 00:17:42.052 "num_base_bdevs": 4, 00:17:42.052 "num_base_bdevs_discovered": 4, 00:17:42.052 "num_base_bdevs_operational": 4, 00:17:42.052 "base_bdevs_list": [ 00:17:42.052 { 00:17:42.052 "name": "BaseBdev1", 00:17:42.052 "uuid": "c1c8542a-3274-5918-a3de-bad88f20c6d5", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": "BaseBdev2", 00:17:42.052 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": "BaseBdev3", 00:17:42.052 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": "BaseBdev4", 00:17:42.052 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 } 00:17:42.052 ] 00:17:42.052 }' 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.052 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.621 [2024-11-19 10:11:56.558146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.621 10:11:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:42.880 [2024-11-19 10:11:56.974008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:42.880 /dev/nbd0 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.880 1+0 records in 00:17:42.880 1+0 records out 00:17:42.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280525 s, 14.6 MB/s 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.880 10:11:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:42.880 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:42.881 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:43.818 512+0 records in 00:17:43.818 512+0 records out 00:17:43.818 100663296 bytes (101 MB, 96 MiB) copied, 0.662697 s, 152 MB/s 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.818 10:11:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.818 [2024-11-19 10:11:58.035449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.818 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.077 [2024-11-19 10:11:58.055660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.077 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.077 "name": "raid_bdev1", 00:17:44.077 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:44.077 "strip_size_kb": 64, 00:17:44.077 "state": "online", 00:17:44.077 "raid_level": "raid5f", 00:17:44.077 "superblock": false, 00:17:44.077 "num_base_bdevs": 4, 00:17:44.077 "num_base_bdevs_discovered": 3, 00:17:44.077 "num_base_bdevs_operational": 3, 00:17:44.077 "base_bdevs_list": [ 00:17:44.077 { 00:17:44.077 "name": null, 00:17:44.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.077 "is_configured": false, 00:17:44.077 "data_offset": 0, 00:17:44.078 "data_size": 65536 00:17:44.078 }, 00:17:44.078 { 00:17:44.078 "name": "BaseBdev2", 00:17:44.078 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:44.078 "is_configured": true, 00:17:44.078 
"data_offset": 0, 00:17:44.078 "data_size": 65536 00:17:44.078 }, 00:17:44.078 { 00:17:44.078 "name": "BaseBdev3", 00:17:44.078 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:44.078 "is_configured": true, 00:17:44.078 "data_offset": 0, 00:17:44.078 "data_size": 65536 00:17:44.078 }, 00:17:44.078 { 00:17:44.078 "name": "BaseBdev4", 00:17:44.078 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:44.078 "is_configured": true, 00:17:44.078 "data_offset": 0, 00:17:44.078 "data_size": 65536 00:17:44.078 } 00:17:44.078 ] 00:17:44.078 }' 00:17:44.078 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.078 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.336 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.336 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.336 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.336 [2024-11-19 10:11:58.563805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.595 [2024-11-19 10:11:58.578375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:44.595 10:11:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.595 10:11:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:44.595 [2024-11-19 10:11:58.587714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.531 "name": "raid_bdev1", 00:17:45.531 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:45.531 "strip_size_kb": 64, 00:17:45.531 "state": "online", 00:17:45.531 "raid_level": "raid5f", 00:17:45.531 "superblock": false, 00:17:45.531 "num_base_bdevs": 4, 00:17:45.531 "num_base_bdevs_discovered": 4, 00:17:45.531 "num_base_bdevs_operational": 4, 00:17:45.531 "process": { 00:17:45.531 "type": "rebuild", 00:17:45.531 "target": "spare", 00:17:45.531 "progress": { 00:17:45.531 "blocks": 17280, 00:17:45.531 "percent": 8 00:17:45.531 } 00:17:45.531 }, 00:17:45.531 "base_bdevs_list": [ 00:17:45.531 { 00:17:45.531 "name": "spare", 00:17:45.531 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:45.531 "is_configured": true, 00:17:45.531 "data_offset": 0, 00:17:45.531 "data_size": 65536 00:17:45.531 }, 00:17:45.531 { 00:17:45.531 "name": "BaseBdev2", 00:17:45.531 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:45.531 "is_configured": true, 00:17:45.531 "data_offset": 0, 00:17:45.531 "data_size": 65536 00:17:45.531 }, 00:17:45.531 { 00:17:45.531 "name": "BaseBdev3", 00:17:45.531 "uuid": 
"415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:45.531 "is_configured": true, 00:17:45.531 "data_offset": 0, 00:17:45.531 "data_size": 65536 00:17:45.531 }, 00:17:45.531 { 00:17:45.531 "name": "BaseBdev4", 00:17:45.531 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:45.531 "is_configured": true, 00:17:45.531 "data_offset": 0, 00:17:45.531 "data_size": 65536 00:17:45.531 } 00:17:45.531 ] 00:17:45.531 }' 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 [2024-11-19 10:11:59.737159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.789 [2024-11-19 10:11:59.801422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.789 [2024-11-19 10:11:59.801525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.789 [2024-11-19 10:11:59.801554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.789 [2024-11-19 10:11:59.801592] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.789 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.789 "name": "raid_bdev1", 00:17:45.789 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:45.789 "strip_size_kb": 64, 00:17:45.789 "state": "online", 00:17:45.789 "raid_level": "raid5f", 00:17:45.789 "superblock": false, 00:17:45.789 "num_base_bdevs": 4, 00:17:45.789 "num_base_bdevs_discovered": 3, 00:17:45.789 
"num_base_bdevs_operational": 3, 00:17:45.789 "base_bdevs_list": [ 00:17:45.789 { 00:17:45.789 "name": null, 00:17:45.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.789 "is_configured": false, 00:17:45.789 "data_offset": 0, 00:17:45.789 "data_size": 65536 00:17:45.789 }, 00:17:45.789 { 00:17:45.789 "name": "BaseBdev2", 00:17:45.789 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:45.789 "is_configured": true, 00:17:45.789 "data_offset": 0, 00:17:45.789 "data_size": 65536 00:17:45.789 }, 00:17:45.789 { 00:17:45.789 "name": "BaseBdev3", 00:17:45.789 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:45.789 "is_configured": true, 00:17:45.789 "data_offset": 0, 00:17:45.789 "data_size": 65536 00:17:45.789 }, 00:17:45.789 { 00:17:45.789 "name": "BaseBdev4", 00:17:45.789 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:45.790 "is_configured": true, 00:17:45.790 "data_offset": 0, 00:17:45.790 "data_size": 65536 00:17:45.790 } 00:17:45.790 ] 00:17:45.790 }' 00:17:45.790 10:11:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.790 10:11:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.401 10:12:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.401 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.401 "name": "raid_bdev1", 00:17:46.401 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:46.401 "strip_size_kb": 64, 00:17:46.401 "state": "online", 00:17:46.401 "raid_level": "raid5f", 00:17:46.401 "superblock": false, 00:17:46.401 "num_base_bdevs": 4, 00:17:46.401 "num_base_bdevs_discovered": 3, 00:17:46.401 "num_base_bdevs_operational": 3, 00:17:46.401 "base_bdevs_list": [ 00:17:46.401 { 00:17:46.402 "name": null, 00:17:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.402 "is_configured": false, 00:17:46.402 "data_offset": 0, 00:17:46.402 "data_size": 65536 00:17:46.402 }, 00:17:46.402 { 00:17:46.402 "name": "BaseBdev2", 00:17:46.402 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:46.402 "is_configured": true, 00:17:46.402 "data_offset": 0, 00:17:46.402 "data_size": 65536 00:17:46.402 }, 00:17:46.402 { 00:17:46.402 "name": "BaseBdev3", 00:17:46.402 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:46.402 "is_configured": true, 00:17:46.402 "data_offset": 0, 00:17:46.402 "data_size": 65536 00:17:46.402 }, 00:17:46.402 { 00:17:46.402 "name": "BaseBdev4", 00:17:46.402 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:46.402 "is_configured": true, 00:17:46.402 "data_offset": 0, 00:17:46.402 "data_size": 65536 00:17:46.402 } 00:17:46.402 ] 00:17:46.402 }' 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.402 [2024-11-19 10:12:00.527441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.402 [2024-11-19 10:12:00.542308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.402 10:12:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:46.402 [2024-11-19 10:12:00.552203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.339 10:12:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.339 10:12:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.598 "name": "raid_bdev1", 00:17:47.598 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:47.598 "strip_size_kb": 64, 00:17:47.598 "state": "online", 00:17:47.598 "raid_level": "raid5f", 00:17:47.598 "superblock": false, 00:17:47.598 "num_base_bdevs": 4, 00:17:47.598 "num_base_bdevs_discovered": 4, 00:17:47.598 "num_base_bdevs_operational": 4, 00:17:47.598 "process": { 00:17:47.598 "type": "rebuild", 00:17:47.598 "target": "spare", 00:17:47.598 "progress": { 00:17:47.598 "blocks": 17280, 00:17:47.598 "percent": 8 00:17:47.598 } 00:17:47.598 }, 00:17:47.598 "base_bdevs_list": [ 00:17:47.598 { 00:17:47.598 "name": "spare", 00:17:47.598 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev2", 00:17:47.598 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev3", 00:17:47.598 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev4", 00:17:47.598 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 } 00:17:47.598 ] 00:17:47.598 }' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=690 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.598 
"name": "raid_bdev1", 00:17:47.598 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:47.598 "strip_size_kb": 64, 00:17:47.598 "state": "online", 00:17:47.598 "raid_level": "raid5f", 00:17:47.598 "superblock": false, 00:17:47.598 "num_base_bdevs": 4, 00:17:47.598 "num_base_bdevs_discovered": 4, 00:17:47.598 "num_base_bdevs_operational": 4, 00:17:47.598 "process": { 00:17:47.598 "type": "rebuild", 00:17:47.598 "target": "spare", 00:17:47.598 "progress": { 00:17:47.598 "blocks": 21120, 00:17:47.598 "percent": 10 00:17:47.598 } 00:17:47.598 }, 00:17:47.598 "base_bdevs_list": [ 00:17:47.598 { 00:17:47.598 "name": "spare", 00:17:47.598 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev2", 00:17:47.598 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev3", 00:17:47.598 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 }, 00:17:47.598 { 00:17:47.598 "name": "BaseBdev4", 00:17:47.598 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:47.598 "is_configured": true, 00:17:47.598 "data_offset": 0, 00:17:47.598 "data_size": 65536 00:17:47.598 } 00:17:47.598 ] 00:17:47.598 }' 00:17:47.598 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.858 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.858 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.858 10:12:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.858 10:12:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.796 "name": "raid_bdev1", 00:17:48.796 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:48.796 "strip_size_kb": 64, 00:17:48.796 "state": "online", 00:17:48.796 "raid_level": "raid5f", 00:17:48.796 "superblock": false, 00:17:48.796 "num_base_bdevs": 4, 00:17:48.796 "num_base_bdevs_discovered": 4, 00:17:48.796 "num_base_bdevs_operational": 4, 00:17:48.796 "process": { 00:17:48.796 "type": "rebuild", 00:17:48.796 "target": "spare", 00:17:48.796 "progress": { 00:17:48.796 "blocks": 44160, 00:17:48.796 "percent": 22 00:17:48.796 } 00:17:48.796 }, 00:17:48.796 "base_bdevs_list": [ 00:17:48.796 { 
00:17:48.796 "name": "spare", 00:17:48.796 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:48.796 "is_configured": true, 00:17:48.796 "data_offset": 0, 00:17:48.796 "data_size": 65536 00:17:48.796 }, 00:17:48.796 { 00:17:48.796 "name": "BaseBdev2", 00:17:48.796 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:48.796 "is_configured": true, 00:17:48.796 "data_offset": 0, 00:17:48.796 "data_size": 65536 00:17:48.796 }, 00:17:48.796 { 00:17:48.796 "name": "BaseBdev3", 00:17:48.796 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:48.796 "is_configured": true, 00:17:48.796 "data_offset": 0, 00:17:48.796 "data_size": 65536 00:17:48.796 }, 00:17:48.796 { 00:17:48.796 "name": "BaseBdev4", 00:17:48.796 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:48.796 "is_configured": true, 00:17:48.796 "data_offset": 0, 00:17:48.796 "data_size": 65536 00:17:48.796 } 00:17:48.796 ] 00:17:48.796 }' 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.796 10:12:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.796 10:12:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.055 10:12:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.055 10:12:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.992 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.992 "name": "raid_bdev1", 00:17:49.992 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:49.992 "strip_size_kb": 64, 00:17:49.992 "state": "online", 00:17:49.992 "raid_level": "raid5f", 00:17:49.992 "superblock": false, 00:17:49.992 "num_base_bdevs": 4, 00:17:49.992 "num_base_bdevs_discovered": 4, 00:17:49.992 "num_base_bdevs_operational": 4, 00:17:49.992 "process": { 00:17:49.992 "type": "rebuild", 00:17:49.992 "target": "spare", 00:17:49.992 "progress": { 00:17:49.993 "blocks": 65280, 00:17:49.993 "percent": 33 00:17:49.993 } 00:17:49.993 }, 00:17:49.993 "base_bdevs_list": [ 00:17:49.993 { 00:17:49.993 "name": "spare", 00:17:49.993 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:49.993 "is_configured": true, 00:17:49.993 "data_offset": 0, 00:17:49.993 "data_size": 65536 00:17:49.993 }, 00:17:49.993 { 00:17:49.993 "name": "BaseBdev2", 00:17:49.993 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:49.993 "is_configured": true, 00:17:49.993 "data_offset": 0, 00:17:49.993 "data_size": 65536 00:17:49.993 }, 00:17:49.993 { 00:17:49.993 "name": "BaseBdev3", 00:17:49.993 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:49.993 "is_configured": true, 00:17:49.993 "data_offset": 0, 00:17:49.993 
"data_size": 65536 00:17:49.993 }, 00:17:49.993 { 00:17:49.993 "name": "BaseBdev4", 00:17:49.993 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:49.993 "is_configured": true, 00:17:49.993 "data_offset": 0, 00:17:49.993 "data_size": 65536 00:17:49.993 } 00:17:49.993 ] 00:17:49.993 }' 00:17:49.993 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.993 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.993 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.993 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.993 10:12:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.369 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.369 "name": "raid_bdev1", 00:17:51.369 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:51.369 "strip_size_kb": 64, 00:17:51.369 "state": "online", 00:17:51.369 "raid_level": "raid5f", 00:17:51.369 "superblock": false, 00:17:51.369 "num_base_bdevs": 4, 00:17:51.369 "num_base_bdevs_discovered": 4, 00:17:51.369 "num_base_bdevs_operational": 4, 00:17:51.369 "process": { 00:17:51.369 "type": "rebuild", 00:17:51.369 "target": "spare", 00:17:51.369 "progress": { 00:17:51.369 "blocks": 88320, 00:17:51.369 "percent": 44 00:17:51.369 } 00:17:51.369 }, 00:17:51.369 "base_bdevs_list": [ 00:17:51.369 { 00:17:51.369 "name": "spare", 00:17:51.369 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:51.369 "is_configured": true, 00:17:51.369 "data_offset": 0, 00:17:51.369 "data_size": 65536 00:17:51.369 }, 00:17:51.369 { 00:17:51.369 "name": "BaseBdev2", 00:17:51.369 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:51.369 "is_configured": true, 00:17:51.369 "data_offset": 0, 00:17:51.369 "data_size": 65536 00:17:51.369 }, 00:17:51.369 { 00:17:51.369 "name": "BaseBdev3", 00:17:51.369 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:51.369 "is_configured": true, 00:17:51.369 "data_offset": 0, 00:17:51.370 "data_size": 65536 00:17:51.370 }, 00:17:51.370 { 00:17:51.370 "name": "BaseBdev4", 00:17:51.370 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:51.370 "is_configured": true, 00:17:51.370 "data_offset": 0, 00:17:51.370 "data_size": 65536 00:17:51.370 } 00:17:51.370 ] 00:17:51.370 }' 00:17:51.370 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.370 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.370 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:51.370 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.370 10:12:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.306 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.307 "name": "raid_bdev1", 00:17:52.307 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:52.307 "strip_size_kb": 64, 00:17:52.307 "state": "online", 00:17:52.307 "raid_level": "raid5f", 00:17:52.307 "superblock": false, 00:17:52.307 "num_base_bdevs": 4, 00:17:52.307 "num_base_bdevs_discovered": 4, 00:17:52.307 "num_base_bdevs_operational": 4, 00:17:52.307 "process": { 00:17:52.307 "type": "rebuild", 00:17:52.307 "target": "spare", 00:17:52.307 
"progress": { 00:17:52.307 "blocks": 109440, 00:17:52.307 "percent": 55 00:17:52.307 } 00:17:52.307 }, 00:17:52.307 "base_bdevs_list": [ 00:17:52.307 { 00:17:52.307 "name": "spare", 00:17:52.307 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:52.307 "is_configured": true, 00:17:52.307 "data_offset": 0, 00:17:52.307 "data_size": 65536 00:17:52.307 }, 00:17:52.307 { 00:17:52.307 "name": "BaseBdev2", 00:17:52.307 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:52.307 "is_configured": true, 00:17:52.307 "data_offset": 0, 00:17:52.307 "data_size": 65536 00:17:52.307 }, 00:17:52.307 { 00:17:52.307 "name": "BaseBdev3", 00:17:52.307 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:52.307 "is_configured": true, 00:17:52.307 "data_offset": 0, 00:17:52.307 "data_size": 65536 00:17:52.307 }, 00:17:52.307 { 00:17:52.307 "name": "BaseBdev4", 00:17:52.307 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:52.307 "is_configured": true, 00:17:52.307 "data_offset": 0, 00:17:52.307 "data_size": 65536 00:17:52.307 } 00:17:52.307 ] 00:17:52.307 }' 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.307 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.565 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.565 10:12:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:53.502 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.502 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.502 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.502 10:12:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.502 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.503 "name": "raid_bdev1", 00:17:53.503 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:53.503 "strip_size_kb": 64, 00:17:53.503 "state": "online", 00:17:53.503 "raid_level": "raid5f", 00:17:53.503 "superblock": false, 00:17:53.503 "num_base_bdevs": 4, 00:17:53.503 "num_base_bdevs_discovered": 4, 00:17:53.503 "num_base_bdevs_operational": 4, 00:17:53.503 "process": { 00:17:53.503 "type": "rebuild", 00:17:53.503 "target": "spare", 00:17:53.503 "progress": { 00:17:53.503 "blocks": 132480, 00:17:53.503 "percent": 67 00:17:53.503 } 00:17:53.503 }, 00:17:53.503 "base_bdevs_list": [ 00:17:53.503 { 00:17:53.503 "name": "spare", 00:17:53.503 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:53.503 "is_configured": true, 00:17:53.503 "data_offset": 0, 00:17:53.503 "data_size": 65536 00:17:53.503 }, 00:17:53.503 { 00:17:53.503 "name": "BaseBdev2", 00:17:53.503 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:53.503 "is_configured": true, 00:17:53.503 "data_offset": 0, 00:17:53.503 "data_size": 65536 00:17:53.503 }, 00:17:53.503 { 
00:17:53.503 "name": "BaseBdev3", 00:17:53.503 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:53.503 "is_configured": true, 00:17:53.503 "data_offset": 0, 00:17:53.503 "data_size": 65536 00:17:53.503 }, 00:17:53.503 { 00:17:53.503 "name": "BaseBdev4", 00:17:53.503 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:53.503 "is_configured": true, 00:17:53.503 "data_offset": 0, 00:17:53.503 "data_size": 65536 00:17:53.503 } 00:17:53.503 ] 00:17:53.503 }' 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.503 10:12:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.879 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.879 "name": "raid_bdev1", 00:17:54.879 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:54.879 "strip_size_kb": 64, 00:17:54.879 "state": "online", 00:17:54.879 "raid_level": "raid5f", 00:17:54.879 "superblock": false, 00:17:54.879 "num_base_bdevs": 4, 00:17:54.879 "num_base_bdevs_discovered": 4, 00:17:54.879 "num_base_bdevs_operational": 4, 00:17:54.879 "process": { 00:17:54.879 "type": "rebuild", 00:17:54.879 "target": "spare", 00:17:54.879 "progress": { 00:17:54.879 "blocks": 153600, 00:17:54.879 "percent": 78 00:17:54.879 } 00:17:54.879 }, 00:17:54.879 "base_bdevs_list": [ 00:17:54.879 { 00:17:54.879 "name": "spare", 00:17:54.880 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:54.880 "is_configured": true, 00:17:54.880 "data_offset": 0, 00:17:54.880 "data_size": 65536 00:17:54.880 }, 00:17:54.880 { 00:17:54.880 "name": "BaseBdev2", 00:17:54.880 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:54.880 "is_configured": true, 00:17:54.880 "data_offset": 0, 00:17:54.880 "data_size": 65536 00:17:54.880 }, 00:17:54.880 { 00:17:54.880 "name": "BaseBdev3", 00:17:54.880 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:54.880 "is_configured": true, 00:17:54.880 "data_offset": 0, 00:17:54.880 "data_size": 65536 00:17:54.880 }, 00:17:54.880 { 00:17:54.880 "name": "BaseBdev4", 00:17:54.880 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:54.880 "is_configured": true, 00:17:54.880 "data_offset": 0, 00:17:54.880 "data_size": 65536 00:17:54.880 } 00:17:54.880 ] 00:17:54.880 }' 00:17:54.880 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.880 10:12:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.880 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.880 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.880 10:12:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.814 "name": "raid_bdev1", 00:17:55.814 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:55.814 "strip_size_kb": 64, 00:17:55.814 "state": "online", 00:17:55.814 "raid_level": "raid5f", 00:17:55.814 "superblock": false, 00:17:55.814 "num_base_bdevs": 4, 00:17:55.814 
"num_base_bdevs_discovered": 4, 00:17:55.814 "num_base_bdevs_operational": 4, 00:17:55.814 "process": { 00:17:55.814 "type": "rebuild", 00:17:55.814 "target": "spare", 00:17:55.814 "progress": { 00:17:55.814 "blocks": 176640, 00:17:55.814 "percent": 89 00:17:55.814 } 00:17:55.814 }, 00:17:55.814 "base_bdevs_list": [ 00:17:55.814 { 00:17:55.814 "name": "spare", 00:17:55.814 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:55.814 "is_configured": true, 00:17:55.814 "data_offset": 0, 00:17:55.814 "data_size": 65536 00:17:55.814 }, 00:17:55.814 { 00:17:55.814 "name": "BaseBdev2", 00:17:55.814 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:55.814 "is_configured": true, 00:17:55.814 "data_offset": 0, 00:17:55.814 "data_size": 65536 00:17:55.814 }, 00:17:55.814 { 00:17:55.814 "name": "BaseBdev3", 00:17:55.814 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:55.814 "is_configured": true, 00:17:55.814 "data_offset": 0, 00:17:55.814 "data_size": 65536 00:17:55.814 }, 00:17:55.814 { 00:17:55.814 "name": "BaseBdev4", 00:17:55.814 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:55.814 "is_configured": true, 00:17:55.814 "data_offset": 0, 00:17:55.814 "data_size": 65536 00:17:55.814 } 00:17:55.814 ] 00:17:55.814 }' 00:17:55.814 10:12:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.814 10:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.814 10:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.074 10:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.074 10:12:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.010 [2024-11-19 10:12:10.967121] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:57.010 [2024-11-19 10:12:10.967233] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:57.010 [2024-11-19 10:12:10.967302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.010 "name": "raid_bdev1", 00:17:57.010 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:57.010 "strip_size_kb": 64, 00:17:57.010 "state": "online", 00:17:57.010 "raid_level": "raid5f", 00:17:57.010 "superblock": false, 00:17:57.010 "num_base_bdevs": 4, 00:17:57.010 "num_base_bdevs_discovered": 4, 00:17:57.010 "num_base_bdevs_operational": 4, 00:17:57.010 "base_bdevs_list": [ 00:17:57.010 { 00:17:57.010 "name": "spare", 00:17:57.010 "uuid": 
"e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 65536 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "BaseBdev2", 00:17:57.010 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 65536 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "BaseBdev3", 00:17:57.010 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 65536 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "BaseBdev4", 00:17:57.010 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 0, 00:17:57.010 "data_size": 65536 00:17:57.010 } 00:17:57.010 ] 00:17:57.010 }' 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.010 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.270 "name": "raid_bdev1", 00:17:57.270 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:57.270 "strip_size_kb": 64, 00:17:57.270 "state": "online", 00:17:57.270 "raid_level": "raid5f", 00:17:57.270 "superblock": false, 00:17:57.270 "num_base_bdevs": 4, 00:17:57.270 "num_base_bdevs_discovered": 4, 00:17:57.270 "num_base_bdevs_operational": 4, 00:17:57.270 "base_bdevs_list": [ 00:17:57.270 { 00:17:57.270 "name": "spare", 00:17:57.270 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev2", 00:17:57.270 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev3", 00:17:57.270 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev4", 00:17:57.270 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 } 00:17:57.270 ] 00:17:57.270 }' 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.270 "name": "raid_bdev1", 00:17:57.270 "uuid": "9834a627-ae34-4e18-acf3-e0b377727def", 00:17:57.270 "strip_size_kb": 64, 00:17:57.270 "state": "online", 00:17:57.270 "raid_level": "raid5f", 00:17:57.270 "superblock": false, 00:17:57.270 "num_base_bdevs": 4, 00:17:57.270 "num_base_bdevs_discovered": 4, 00:17:57.270 "num_base_bdevs_operational": 4, 00:17:57.270 "base_bdevs_list": [ 00:17:57.270 { 00:17:57.270 "name": "spare", 00:17:57.270 "uuid": "e558d211-f5d4-5a8e-b34f-bc0e61644402", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev2", 00:17:57.270 "uuid": "149db4bf-b4ee-5bc2-af98-cb15079f3980", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev3", 00:17:57.270 "uuid": "415be1ec-f375-5d8c-9bd8-f416bbe72061", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 }, 00:17:57.270 { 00:17:57.270 "name": "BaseBdev4", 00:17:57.270 "uuid": "aa09190f-723a-59eb-bfac-136748e82779", 00:17:57.270 "is_configured": true, 00:17:57.270 "data_offset": 0, 00:17:57.270 "data_size": 65536 00:17:57.270 } 00:17:57.270 ] 00:17:57.270 }' 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.270 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.838 [2024-11-19 10:12:11.870803] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.838 [2024-11-19 10:12:11.870856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.838 [2024-11-19 10:12:11.870982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.838 [2024-11-19 10:12:11.871118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.838 [2024-11-19 10:12:11.871136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.838 10:12:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:58.098 /dev/nbd0 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:58.098 1+0 records in 
00:17:58.098 1+0 records out 00:17:58.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379287 s, 10.8 MB/s 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:58.098 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:58.356 /dev/nbd1 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 ))
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:58.615 1+0 records in
00:17:58.615 1+0 records out
00:17:58.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321681 s, 12.7 MB/s
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:58.615 10:12:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:59.182 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85007
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85007 ']'
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85007
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85007
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:59.442 killing process with pid 85007
10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85007'
Received shutdown signal, test time was about 60.000000 seconds
00:17:59.442
00:17:59.442 Latency(us)
00:17:59.442 [2024-11-19T10:12:13.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:59.442 [2024-11-19T10:12:13.674Z] ===================================================================================================================
00:17:59.442 [2024-11-19T10:12:13.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85007
00:17:59.442 10:12:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85007
00:17:59.442 [2024-11-19 10:12:13.550883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:00.011 [2024-11-19 10:12:14.032994] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:00.947 10:12:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:18:00.947
00:18:00.947 real 0m20.626s
00:18:00.947 user 0m25.752s
00:18:00.947 sys 0m2.363s
00:18:00.947 10:12:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:00.947 10:12:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:18:00.947 ************************************
00:18:00.947 END TEST raid5f_rebuild_test
00:18:00.947 ************************************
00:18:01.206 10:12:15 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true
00:18:01.206 10:12:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:18:01.206 10:12:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:01.206 10:12:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:01.206 ************************************
00:18:01.206 START TEST raid5f_rebuild_test_sb
00:18:01.206 ************************************
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85522
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85522
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85522 ']'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:01.206 10:12:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:01.206 [2024-11-19 10:12:15.350690] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization...
00:18:01.206 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:01.206 Zero copy mechanism will not be used.
00:18:01.206 [2024-11-19 10:12:15.350993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85522 ]
00:18:01.464 [2024-11-19 10:12:15.551966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.722 [2024-11-19 10:12:15.697943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:01.722 [2024-11-19 10:12:15.924654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:01.722 [2024-11-19 10:12:15.924744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.289 BaseBdev1_malloc
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.289 [2024-11-19 10:12:16.370517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:02.289 [2024-11-19 10:12:16.370620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.289 [2024-11-19 10:12:16.370658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:02.289 [2024-11-19 10:12:16.370679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.289 [2024-11-19 10:12:16.373685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.289 [2024-11-19 10:12:16.373740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:02.289 BaseBdev1
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.289 BaseBdev2_malloc
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:02.289 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.290 [2024-11-19 10:12:16.430701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:02.290 [2024-11-19 10:12:16.430792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.290 [2024-11-19 10:12:16.430826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:02.290 [2024-11-19 10:12:16.430848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.290 [2024-11-19 10:12:16.433845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.290 [2024-11-19 10:12:16.433892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:02.290 BaseBdev2
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.290 BaseBdev3_malloc
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.290 [2024-11-19 10:12:16.505965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:18:02.290 [2024-11-19 10:12:16.506060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.290 [2024-11-19 10:12:16.506102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:02.290 [2024-11-19 10:12:16.506123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.290 [2024-11-19 10:12:16.509080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.290 [2024-11-19 10:12:16.509131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:18:02.290 BaseBdev3
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.290 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 BaseBdev4_malloc
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 [2024-11-19 10:12:16.562774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:18:02.549 [2024-11-19 10:12:16.562860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.549 [2024-11-19 10:12:16.562892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:18:02.549 [2024-11-19 10:12:16.562912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.549 [2024-11-19 10:12:16.565838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.549 [2024-11-19 10:12:16.565889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:18:02.549 BaseBdev4
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 spare_malloc
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 spare_delay
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 [2024-11-19 10:12:16.626332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:02.549 [2024-11-19 10:12:16.626409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.549 [2024-11-19 10:12:16.626448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:02.549 [2024-11-19 10:12:16.626467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.549 [2024-11-19 10:12:16.629436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.549 [2024-11-19 10:12:16.629487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:02.549 spare
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 [2024-11-19 10:12:16.634474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:02.549 [2024-11-19 10:12:16.637095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:02.549 [2024-11-19 10:12:16.637202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:02.549 [2024-11-19 10:12:16.637285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:02.549 [2024-11-19 10:12:16.637558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:02.549 [2024-11-19 10:12:16.637594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:18:02.549 [2024-11-19 10:12:16.637942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:02.549 [2024-11-19 10:12:16.644920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:02.549 [2024-11-19 10:12:16.644951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:02.549 [2024-11-19 10:12:16.645214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:02.549 "name": "raid_bdev1",
00:18:02.549 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb",
00:18:02.549 "strip_size_kb": 64,
00:18:02.549 "state": "online",
00:18:02.549 "raid_level": "raid5f",
00:18:02.549 "superblock": true,
00:18:02.549 "num_base_bdevs": 4,
00:18:02.549 "num_base_bdevs_discovered": 4,
00:18:02.549 "num_base_bdevs_operational": 4,
00:18:02.549 "base_bdevs_list": [
00:18:02.549 {
00:18:02.549 "name": "BaseBdev1",
00:18:02.549 "uuid": "02946fc6-1c11-5d0a-9f7c-11463ed34d68",
00:18:02.549 "is_configured": true,
00:18:02.549 "data_offset": 2048,
00:18:02.549 "data_size": 63488
00:18:02.549 },
00:18:02.549 {
00:18:02.549 "name": "BaseBdev2",
00:18:02.549 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415",
00:18:02.549 "is_configured": true,
00:18:02.549 "data_offset": 2048,
00:18:02.549 "data_size": 63488
00:18:02.549 },
00:18:02.549 {
00:18:02.549 "name": "BaseBdev3",
00:18:02.549 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6",
00:18:02.549 "is_configured": true,
00:18:02.549 "data_offset": 2048,
00:18:02.549 "data_size": 63488
00:18:02.549 },
00:18:02.549 {
00:18:02.549 "name": "BaseBdev4",
00:18:02.549 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e",
00:18:02.549 "is_configured": true,
00:18:02.549 "data_offset": 2048,
00:18:02.549 "data_size": 63488
00:18:02.549 }
00:18:02.549 ]
00:18:02.549 }'
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:02.549 10:12:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:03.116 [2024-11-19 10:12:17.181695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:03.116 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:18:03.375 [2024-11-19 10:12:17.585598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
/dev/nbd0
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:03.633 1+0 records in
00:18:03.633 1+0 records out
00:18:03.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336147 s, 12.2 MB/s
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192
00:18:03.633 10:12:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:18:04.283 496+0 records in
00:18:04.283 496+0 records out
00:18:04.283 97517568 bytes (98 MB, 93 MiB) copied, 0.686558 s, 142 MB/s
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:04.283 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:04.542 [2024-11-19 10:12:18.668552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:04.542 [2024-11-19 10:12:18.704835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:04.542 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:04.543 "name": "raid_bdev1",
00:18:04.543 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb",
00:18:04.543 "strip_size_kb": 64,
00:18:04.543 "state": "online",
00:18:04.543 "raid_level": "raid5f",
00:18:04.543 "superblock": true,
00:18:04.543 "num_base_bdevs": 4,
00:18:04.543 "num_base_bdevs_discovered": 3,
00:18:04.543 "num_base_bdevs_operational": 3,
00:18:04.543 "base_bdevs_list": [
00:18:04.543 {
00:18:04.543 "name": null,
00:18:04.543 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:04.543 "is_configured": false,
00:18:04.543 "data_offset": 0,
00:18:04.543 "data_size": 63488
00:18:04.543 },
00:18:04.543 {
00:18:04.543 "name": "BaseBdev2",
00:18:04.543 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415",
00:18:04.543 "is_configured": true,
00:18:04.543 "data_offset": 2048,
00:18:04.543 "data_size": 63488
00:18:04.543 },
00:18:04.543 {
00:18:04.543 "name": "BaseBdev3",
00:18:04.543 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6",
00:18:04.543 "is_configured": true,
00:18:04.543 "data_offset": 2048,
00:18:04.543 "data_size": 63488
00:18:04.543 },
00:18:04.543 {
00:18:04.543 "name": "BaseBdev4",
00:18:04.543 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e",
00:18:04.543 "is_configured": true,
00:18:04.543 "data_offset": 2048,
00:18:04.543 "data_size": 63488
00:18:04.543 }
00:18:04.543 ]
00:18:04.543 }'
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:04.543 10:12:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:05.111 10:12:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:05.111 10:12:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.111 10:12:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:05.111 [2024-11-19 10:12:19.212934] bdev_raid.c:3326:raid_bdev_configure_base_bdev:
*DEBUG*: bdev spare is claimed 00:18:05.111 [2024-11-19 10:12:19.227680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:05.111 10:12:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.111 10:12:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:05.111 [2024-11-19 10:12:19.237151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.047 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.307 "name": "raid_bdev1", 00:18:06.307 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:06.307 "strip_size_kb": 64, 00:18:06.307 "state": "online", 00:18:06.307 "raid_level": "raid5f", 00:18:06.307 "superblock": true, 00:18:06.307 "num_base_bdevs": 4, 
00:18:06.307 "num_base_bdevs_discovered": 4, 00:18:06.307 "num_base_bdevs_operational": 4, 00:18:06.307 "process": { 00:18:06.307 "type": "rebuild", 00:18:06.307 "target": "spare", 00:18:06.307 "progress": { 00:18:06.307 "blocks": 17280, 00:18:06.307 "percent": 9 00:18:06.307 } 00:18:06.307 }, 00:18:06.307 "base_bdevs_list": [ 00:18:06.307 { 00:18:06.307 "name": "spare", 00:18:06.307 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev2", 00:18:06.307 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev3", 00:18:06.307 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev4", 00:18:06.307 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 } 00:18:06.307 ] 00:18:06.307 }' 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 10:12:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 [2024-11-19 10:12:20.399077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.307 [2024-11-19 10:12:20.451353] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.307 [2024-11-19 10:12:20.451484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.307 [2024-11-19 10:12:20.451513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.307 [2024-11-19 10:12:20.451530] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.307 "name": "raid_bdev1", 00:18:06.307 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:06.307 "strip_size_kb": 64, 00:18:06.307 "state": "online", 00:18:06.307 "raid_level": "raid5f", 00:18:06.307 "superblock": true, 00:18:06.307 "num_base_bdevs": 4, 00:18:06.307 "num_base_bdevs_discovered": 3, 00:18:06.307 "num_base_bdevs_operational": 3, 00:18:06.307 "base_bdevs_list": [ 00:18:06.307 { 00:18:06.307 "name": null, 00:18:06.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.307 "is_configured": false, 00:18:06.307 "data_offset": 0, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev2", 00:18:06.307 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev3", 00:18:06.307 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 }, 00:18:06.307 { 00:18:06.307 "name": "BaseBdev4", 00:18:06.307 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:06.307 "is_configured": true, 00:18:06.307 "data_offset": 2048, 00:18:06.307 "data_size": 63488 00:18:06.307 } 00:18:06.307 ] 00:18:06.307 }' 00:18:06.307 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.308 10:12:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.875 "name": "raid_bdev1", 00:18:06.875 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:06.875 "strip_size_kb": 64, 00:18:06.875 "state": "online", 00:18:06.875 "raid_level": "raid5f", 00:18:06.875 "superblock": true, 00:18:06.875 "num_base_bdevs": 4, 00:18:06.875 "num_base_bdevs_discovered": 3, 00:18:06.875 "num_base_bdevs_operational": 3, 00:18:06.875 "base_bdevs_list": [ 00:18:06.875 { 00:18:06.875 "name": null, 00:18:06.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.875 "is_configured": false, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 63488 00:18:06.875 }, 00:18:06.875 { 
00:18:06.875 "name": "BaseBdev2", 00:18:06.875 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 2048, 00:18:06.875 "data_size": 63488 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev3", 00:18:06.875 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 2048, 00:18:06.875 "data_size": 63488 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev4", 00:18:06.875 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 2048, 00:18:06.875 "data_size": 63488 00:18:06.875 } 00:18:06.875 ] 00:18:06.875 }' 00:18:06.875 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.134 [2024-11-19 10:12:21.173304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.134 [2024-11-19 10:12:21.187393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.134 10:12:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:07.134 [2024-11-19 10:12:21.196437] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.070 "name": "raid_bdev1", 00:18:08.070 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:08.070 "strip_size_kb": 64, 00:18:08.070 "state": "online", 00:18:08.070 "raid_level": "raid5f", 00:18:08.070 "superblock": true, 00:18:08.070 "num_base_bdevs": 4, 00:18:08.070 "num_base_bdevs_discovered": 4, 00:18:08.070 "num_base_bdevs_operational": 4, 00:18:08.070 "process": { 00:18:08.070 "type": "rebuild", 00:18:08.070 "target": "spare", 00:18:08.070 "progress": { 00:18:08.070 "blocks": 17280, 00:18:08.070 "percent": 9 00:18:08.070 } 00:18:08.070 }, 00:18:08.070 "base_bdevs_list": [ 00:18:08.070 { 00:18:08.070 "name": "spare", 00:18:08.070 "uuid": 
"91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:08.070 "is_configured": true, 00:18:08.070 "data_offset": 2048, 00:18:08.070 "data_size": 63488 00:18:08.070 }, 00:18:08.070 { 00:18:08.070 "name": "BaseBdev2", 00:18:08.070 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:08.070 "is_configured": true, 00:18:08.070 "data_offset": 2048, 00:18:08.070 "data_size": 63488 00:18:08.070 }, 00:18:08.070 { 00:18:08.070 "name": "BaseBdev3", 00:18:08.070 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:08.070 "is_configured": true, 00:18:08.070 "data_offset": 2048, 00:18:08.070 "data_size": 63488 00:18:08.070 }, 00:18:08.070 { 00:18:08.070 "name": "BaseBdev4", 00:18:08.070 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:08.070 "is_configured": true, 00:18:08.070 "data_offset": 2048, 00:18:08.070 "data_size": 63488 00:18:08.070 } 00:18:08.070 ] 00:18:08.070 }' 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.070 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:08.329 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=711 00:18:08.329 
10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.329 "name": "raid_bdev1", 00:18:08.329 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:08.329 "strip_size_kb": 64, 00:18:08.329 "state": "online", 00:18:08.329 "raid_level": "raid5f", 00:18:08.329 "superblock": true, 00:18:08.329 "num_base_bdevs": 4, 00:18:08.329 "num_base_bdevs_discovered": 4, 00:18:08.329 "num_base_bdevs_operational": 4, 00:18:08.329 "process": { 00:18:08.329 "type": "rebuild", 00:18:08.329 "target": "spare", 00:18:08.329 "progress": { 00:18:08.329 "blocks": 21120, 00:18:08.329 "percent": 11 00:18:08.329 } 00:18:08.329 }, 00:18:08.329 "base_bdevs_list": [ 00:18:08.329 { 00:18:08.329 "name": "spare", 00:18:08.329 "uuid": 
"91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:08.329 "is_configured": true, 00:18:08.329 "data_offset": 2048, 00:18:08.329 "data_size": 63488 00:18:08.329 }, 00:18:08.329 { 00:18:08.329 "name": "BaseBdev2", 00:18:08.329 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:08.329 "is_configured": true, 00:18:08.329 "data_offset": 2048, 00:18:08.329 "data_size": 63488 00:18:08.329 }, 00:18:08.329 { 00:18:08.329 "name": "BaseBdev3", 00:18:08.329 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:08.329 "is_configured": true, 00:18:08.329 "data_offset": 2048, 00:18:08.329 "data_size": 63488 00:18:08.329 }, 00:18:08.329 { 00:18:08.329 "name": "BaseBdev4", 00:18:08.329 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:08.329 "is_configured": true, 00:18:08.329 "data_offset": 2048, 00:18:08.329 "data_size": 63488 00:18:08.329 } 00:18:08.329 ] 00:18:08.329 }' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.329 10:12:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.705 "name": "raid_bdev1", 00:18:09.705 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:09.705 "strip_size_kb": 64, 00:18:09.705 "state": "online", 00:18:09.705 "raid_level": "raid5f", 00:18:09.705 "superblock": true, 00:18:09.705 "num_base_bdevs": 4, 00:18:09.705 "num_base_bdevs_discovered": 4, 00:18:09.705 "num_base_bdevs_operational": 4, 00:18:09.705 "process": { 00:18:09.705 "type": "rebuild", 00:18:09.705 "target": "spare", 00:18:09.705 "progress": { 00:18:09.705 "blocks": 44160, 00:18:09.705 "percent": 23 00:18:09.705 } 00:18:09.705 }, 00:18:09.705 "base_bdevs_list": [ 00:18:09.705 { 00:18:09.705 "name": "spare", 00:18:09.705 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:09.705 "is_configured": true, 00:18:09.705 "data_offset": 2048, 00:18:09.705 "data_size": 63488 00:18:09.705 }, 00:18:09.705 { 00:18:09.705 "name": "BaseBdev2", 00:18:09.705 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:09.705 "is_configured": true, 00:18:09.705 "data_offset": 2048, 00:18:09.705 "data_size": 63488 00:18:09.705 }, 00:18:09.705 { 00:18:09.705 "name": "BaseBdev3", 00:18:09.705 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:09.705 "is_configured": true, 00:18:09.705 
"data_offset": 2048, 00:18:09.705 "data_size": 63488 00:18:09.705 }, 00:18:09.705 { 00:18:09.705 "name": "BaseBdev4", 00:18:09.705 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:09.705 "is_configured": true, 00:18:09.705 "data_offset": 2048, 00:18:09.705 "data_size": 63488 00:18:09.705 } 00:18:09.705 ] 00:18:09.705 }' 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.705 10:12:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.640 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.640 "name": "raid_bdev1", 00:18:10.640 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:10.640 "strip_size_kb": 64, 00:18:10.640 "state": "online", 00:18:10.640 "raid_level": "raid5f", 00:18:10.640 "superblock": true, 00:18:10.640 "num_base_bdevs": 4, 00:18:10.640 "num_base_bdevs_discovered": 4, 00:18:10.640 "num_base_bdevs_operational": 4, 00:18:10.640 "process": { 00:18:10.640 "type": "rebuild", 00:18:10.640 "target": "spare", 00:18:10.640 "progress": { 00:18:10.640 "blocks": 65280, 00:18:10.640 "percent": 34 00:18:10.640 } 00:18:10.640 }, 00:18:10.640 "base_bdevs_list": [ 00:18:10.640 { 00:18:10.640 "name": "spare", 00:18:10.640 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:10.640 "is_configured": true, 00:18:10.640 "data_offset": 2048, 00:18:10.640 "data_size": 63488 00:18:10.640 }, 00:18:10.640 { 00:18:10.640 "name": "BaseBdev2", 00:18:10.640 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:10.640 "is_configured": true, 00:18:10.640 "data_offset": 2048, 00:18:10.640 "data_size": 63488 00:18:10.640 }, 00:18:10.640 { 00:18:10.640 "name": "BaseBdev3", 00:18:10.640 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:10.640 "is_configured": true, 00:18:10.640 "data_offset": 2048, 00:18:10.640 "data_size": 63488 00:18:10.640 }, 00:18:10.640 { 00:18:10.640 "name": "BaseBdev4", 00:18:10.640 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:10.640 "is_configured": true, 00:18:10.640 "data_offset": 2048, 00:18:10.640 "data_size": 63488 00:18:10.640 } 00:18:10.640 ] 00:18:10.641 }' 00:18:10.641 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.641 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:10.641 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.641 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.641 10:12:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.016 "name": "raid_bdev1", 00:18:12.016 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:12.016 "strip_size_kb": 64, 00:18:12.016 "state": "online", 00:18:12.016 "raid_level": "raid5f", 00:18:12.016 "superblock": true, 00:18:12.016 "num_base_bdevs": 4, 00:18:12.016 "num_base_bdevs_discovered": 4, 
00:18:12.016 "num_base_bdevs_operational": 4, 00:18:12.016 "process": { 00:18:12.016 "type": "rebuild", 00:18:12.016 "target": "spare", 00:18:12.016 "progress": { 00:18:12.016 "blocks": 88320, 00:18:12.016 "percent": 46 00:18:12.016 } 00:18:12.016 }, 00:18:12.016 "base_bdevs_list": [ 00:18:12.016 { 00:18:12.016 "name": "spare", 00:18:12.016 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:12.016 "is_configured": true, 00:18:12.016 "data_offset": 2048, 00:18:12.016 "data_size": 63488 00:18:12.016 }, 00:18:12.016 { 00:18:12.016 "name": "BaseBdev2", 00:18:12.016 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:12.016 "is_configured": true, 00:18:12.016 "data_offset": 2048, 00:18:12.016 "data_size": 63488 00:18:12.016 }, 00:18:12.016 { 00:18:12.016 "name": "BaseBdev3", 00:18:12.016 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:12.016 "is_configured": true, 00:18:12.016 "data_offset": 2048, 00:18:12.016 "data_size": 63488 00:18:12.016 }, 00:18:12.016 { 00:18:12.016 "name": "BaseBdev4", 00:18:12.016 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:12.016 "is_configured": true, 00:18:12.016 "data_offset": 2048, 00:18:12.016 "data_size": 63488 00:18:12.016 } 00:18:12.016 ] 00:18:12.016 }' 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.016 10:12:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.016 10:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.016 10:12:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.951 "name": "raid_bdev1", 00:18:12.951 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:12.951 "strip_size_kb": 64, 00:18:12.951 "state": "online", 00:18:12.951 "raid_level": "raid5f", 00:18:12.951 "superblock": true, 00:18:12.951 "num_base_bdevs": 4, 00:18:12.951 "num_base_bdevs_discovered": 4, 00:18:12.951 "num_base_bdevs_operational": 4, 00:18:12.951 "process": { 00:18:12.951 "type": "rebuild", 00:18:12.951 "target": "spare", 00:18:12.951 "progress": { 00:18:12.951 "blocks": 109440, 00:18:12.951 "percent": 57 00:18:12.951 } 00:18:12.951 }, 00:18:12.951 "base_bdevs_list": [ 00:18:12.951 { 00:18:12.951 "name": "spare", 00:18:12.951 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:12.951 "is_configured": true, 00:18:12.951 "data_offset": 2048, 00:18:12.951 "data_size": 63488 00:18:12.951 }, 00:18:12.951 { 00:18:12.951 "name": "BaseBdev2", 
00:18:12.951 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:12.951 "is_configured": true, 00:18:12.951 "data_offset": 2048, 00:18:12.951 "data_size": 63488 00:18:12.951 }, 00:18:12.951 { 00:18:12.951 "name": "BaseBdev3", 00:18:12.951 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:12.951 "is_configured": true, 00:18:12.951 "data_offset": 2048, 00:18:12.951 "data_size": 63488 00:18:12.951 }, 00:18:12.951 { 00:18:12.951 "name": "BaseBdev4", 00:18:12.951 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:12.951 "is_configured": true, 00:18:12.951 "data_offset": 2048, 00:18:12.951 "data_size": 63488 00:18:12.951 } 00:18:12.951 ] 00:18:12.951 }' 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.951 10:12:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.327 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.327 "name": "raid_bdev1", 00:18:14.327 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:14.327 "strip_size_kb": 64, 00:18:14.327 "state": "online", 00:18:14.327 "raid_level": "raid5f", 00:18:14.327 "superblock": true, 00:18:14.327 "num_base_bdevs": 4, 00:18:14.327 "num_base_bdevs_discovered": 4, 00:18:14.327 "num_base_bdevs_operational": 4, 00:18:14.327 "process": { 00:18:14.327 "type": "rebuild", 00:18:14.327 "target": "spare", 00:18:14.327 "progress": { 00:18:14.327 "blocks": 132480, 00:18:14.327 "percent": 69 00:18:14.327 } 00:18:14.327 }, 00:18:14.327 "base_bdevs_list": [ 00:18:14.327 { 00:18:14.327 "name": "spare", 00:18:14.327 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:14.327 "is_configured": true, 00:18:14.327 "data_offset": 2048, 00:18:14.327 "data_size": 63488 00:18:14.327 }, 00:18:14.327 { 00:18:14.327 "name": "BaseBdev2", 00:18:14.327 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:14.327 "is_configured": true, 00:18:14.327 "data_offset": 2048, 00:18:14.327 "data_size": 63488 00:18:14.327 }, 00:18:14.327 { 00:18:14.327 "name": "BaseBdev3", 00:18:14.327 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:14.327 "is_configured": true, 00:18:14.327 "data_offset": 2048, 00:18:14.327 "data_size": 63488 00:18:14.327 }, 00:18:14.327 { 00:18:14.327 "name": "BaseBdev4", 00:18:14.327 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:14.328 "is_configured": true, 
00:18:14.328 "data_offset": 2048, 00:18:14.328 "data_size": 63488 00:18:14.328 } 00:18:14.328 ] 00:18:14.328 }' 00:18:14.328 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.328 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.328 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.328 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.328 10:12:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:15.264 "name": "raid_bdev1", 00:18:15.264 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:15.264 "strip_size_kb": 64, 00:18:15.264 "state": "online", 00:18:15.264 "raid_level": "raid5f", 00:18:15.264 "superblock": true, 00:18:15.264 "num_base_bdevs": 4, 00:18:15.264 "num_base_bdevs_discovered": 4, 00:18:15.264 "num_base_bdevs_operational": 4, 00:18:15.264 "process": { 00:18:15.264 "type": "rebuild", 00:18:15.264 "target": "spare", 00:18:15.264 "progress": { 00:18:15.264 "blocks": 153600, 00:18:15.264 "percent": 80 00:18:15.264 } 00:18:15.264 }, 00:18:15.264 "base_bdevs_list": [ 00:18:15.264 { 00:18:15.264 "name": "spare", 00:18:15.264 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:15.264 "is_configured": true, 00:18:15.264 "data_offset": 2048, 00:18:15.264 "data_size": 63488 00:18:15.264 }, 00:18:15.264 { 00:18:15.264 "name": "BaseBdev2", 00:18:15.264 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:15.264 "is_configured": true, 00:18:15.264 "data_offset": 2048, 00:18:15.264 "data_size": 63488 00:18:15.264 }, 00:18:15.264 { 00:18:15.264 "name": "BaseBdev3", 00:18:15.264 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:15.264 "is_configured": true, 00:18:15.264 "data_offset": 2048, 00:18:15.264 "data_size": 63488 00:18:15.264 }, 00:18:15.264 { 00:18:15.264 "name": "BaseBdev4", 00:18:15.264 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:15.264 "is_configured": true, 00:18:15.264 "data_offset": 2048, 00:18:15.264 "data_size": 63488 00:18:15.264 } 00:18:15.264 ] 00:18:15.264 }' 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.264 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.522 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:18:15.522 10:12:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.460 "name": "raid_bdev1", 00:18:16.460 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:16.460 "strip_size_kb": 64, 00:18:16.460 "state": "online", 00:18:16.460 "raid_level": "raid5f", 00:18:16.460 "superblock": true, 00:18:16.460 "num_base_bdevs": 4, 00:18:16.460 "num_base_bdevs_discovered": 4, 00:18:16.460 "num_base_bdevs_operational": 4, 00:18:16.460 "process": { 00:18:16.460 "type": "rebuild", 00:18:16.460 "target": "spare", 00:18:16.460 "progress": { 00:18:16.460 "blocks": 176640, 00:18:16.460 "percent": 92 00:18:16.460 
} 00:18:16.460 }, 00:18:16.460 "base_bdevs_list": [ 00:18:16.460 { 00:18:16.460 "name": "spare", 00:18:16.460 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 2048, 00:18:16.460 "data_size": 63488 00:18:16.460 }, 00:18:16.460 { 00:18:16.460 "name": "BaseBdev2", 00:18:16.460 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 2048, 00:18:16.460 "data_size": 63488 00:18:16.460 }, 00:18:16.460 { 00:18:16.460 "name": "BaseBdev3", 00:18:16.460 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 2048, 00:18:16.460 "data_size": 63488 00:18:16.460 }, 00:18:16.460 { 00:18:16.460 "name": "BaseBdev4", 00:18:16.460 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 2048, 00:18:16.460 "data_size": 63488 00:18:16.460 } 00:18:16.460 ] 00:18:16.460 }' 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.460 10:12:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.396 [2024-11-19 10:12:31.314908] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:17.396 [2024-11-19 10:12:31.315012] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:17.396 [2024-11-19 10:12:31.315220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.655 "name": "raid_bdev1", 00:18:17.655 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:17.655 "strip_size_kb": 64, 00:18:17.655 "state": "online", 00:18:17.655 "raid_level": "raid5f", 00:18:17.655 "superblock": true, 00:18:17.655 "num_base_bdevs": 4, 00:18:17.655 "num_base_bdevs_discovered": 4, 00:18:17.655 "num_base_bdevs_operational": 4, 00:18:17.655 "base_bdevs_list": [ 00:18:17.655 { 00:18:17.655 "name": "spare", 00:18:17.655 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:17.655 "is_configured": true, 00:18:17.655 "data_offset": 2048, 00:18:17.655 "data_size": 63488 00:18:17.655 }, 00:18:17.655 { 00:18:17.655 "name": "BaseBdev2", 00:18:17.655 "uuid": 
"d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:17.655 "is_configured": true, 00:18:17.655 "data_offset": 2048, 00:18:17.655 "data_size": 63488 00:18:17.655 }, 00:18:17.655 { 00:18:17.655 "name": "BaseBdev3", 00:18:17.655 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:17.655 "is_configured": true, 00:18:17.655 "data_offset": 2048, 00:18:17.655 "data_size": 63488 00:18:17.655 }, 00:18:17.655 { 00:18:17.655 "name": "BaseBdev4", 00:18:17.655 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:17.655 "is_configured": true, 00:18:17.655 "data_offset": 2048, 00:18:17.655 "data_size": 63488 00:18:17.655 } 00:18:17.655 ] 00:18:17.655 }' 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.655 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.656 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.656 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.656 
10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.656 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.656 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.914 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.914 "name": "raid_bdev1", 00:18:17.914 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:17.914 "strip_size_kb": 64, 00:18:17.914 "state": "online", 00:18:17.914 "raid_level": "raid5f", 00:18:17.914 "superblock": true, 00:18:17.914 "num_base_bdevs": 4, 00:18:17.914 "num_base_bdevs_discovered": 4, 00:18:17.914 "num_base_bdevs_operational": 4, 00:18:17.914 "base_bdevs_list": [ 00:18:17.914 { 00:18:17.914 "name": "spare", 00:18:17.914 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:17.914 "is_configured": true, 00:18:17.914 "data_offset": 2048, 00:18:17.914 "data_size": 63488 00:18:17.914 }, 00:18:17.914 { 00:18:17.914 "name": "BaseBdev2", 00:18:17.914 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:17.914 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 }, 00:18:17.915 { 00:18:17.915 "name": "BaseBdev3", 00:18:17.915 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 }, 00:18:17.915 { 00:18:17.915 "name": "BaseBdev4", 00:18:17.915 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 } 00:18:17.915 ] 00:18:17.915 }' 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.915 10:12:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.915 10:12:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.915 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.915 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:17.915 "name": "raid_bdev1", 00:18:17.915 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:17.915 "strip_size_kb": 64, 00:18:17.915 "state": "online", 00:18:17.915 "raid_level": "raid5f", 00:18:17.915 "superblock": true, 00:18:17.915 "num_base_bdevs": 4, 00:18:17.915 "num_base_bdevs_discovered": 4, 00:18:17.915 "num_base_bdevs_operational": 4, 00:18:17.915 "base_bdevs_list": [ 00:18:17.915 { 00:18:17.915 "name": "spare", 00:18:17.915 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 }, 00:18:17.915 { 00:18:17.915 "name": "BaseBdev2", 00:18:17.915 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 }, 00:18:17.915 { 00:18:17.915 "name": "BaseBdev3", 00:18:17.915 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 }, 00:18:17.915 { 00:18:17.915 "name": "BaseBdev4", 00:18:17.915 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:17.915 "is_configured": true, 00:18:17.915 "data_offset": 2048, 00:18:17.915 "data_size": 63488 00:18:17.915 } 00:18:17.915 ] 00:18:17.915 }' 00:18:17.915 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.915 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.482 [2024-11-19 10:12:32.555249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.482 [2024-11-19 
10:12:32.555300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.482 [2024-11-19 10:12:32.555584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.482 [2024-11-19 10:12:32.555823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.482 [2024-11-19 10:12:32.555856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.482 10:12:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.482 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:18.741 /dev/nbd0 00:18:18.741 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.999 1+0 records in 00:18:18.999 1+0 
records out 00:18:18.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448088 s, 9.1 MB/s 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.999 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.000 10:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:19.258 /dev/nbd1 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:19.258 10:12:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.258 1+0 records in 00:18:19.258 1+0 records out 00:18:19.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472581 s, 8.7 MB/s 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.258 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.517 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.776 10:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.034 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.034 [2024-11-19 10:12:34.189300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.034 [2024-11-19 10:12:34.189388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.034 [2024-11-19 10:12:34.189429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:20.034 [2024-11-19 10:12:34.189447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.034 [2024-11-19 10:12:34.192851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.034 [2024-11-19 10:12:34.192896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.034 [2024-11-19 10:12:34.193003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:20.034 [2024-11-19 10:12:34.193079] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.034 [2024-11-19 10:12:34.193341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.034 [2024-11-19 10:12:34.193537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.034 [2024-11-19 10:12:34.193649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:20.034 spare 00:18:20.035 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.035 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:20.035 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.035 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.292 [2024-11-19 10:12:34.293897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:20.292 [2024-11-19 10:12:34.293997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:20.292 [2024-11-19 10:12:34.294662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:20.292 [2024-11-19 10:12:34.306179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:20.292 [2024-11-19 10:12:34.306224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:20.292 [2024-11-19 10:12:34.306677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.292 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.293 "name": "raid_bdev1", 00:18:20.293 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:20.293 "strip_size_kb": 64, 00:18:20.293 "state": "online", 00:18:20.293 "raid_level": "raid5f", 00:18:20.293 "superblock": true, 00:18:20.293 "num_base_bdevs": 4, 00:18:20.293 "num_base_bdevs_discovered": 4, 00:18:20.293 "num_base_bdevs_operational": 4, 00:18:20.293 "base_bdevs_list": [ 00:18:20.293 { 
00:18:20.293 "name": "spare", 00:18:20.293 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:20.293 "is_configured": true, 00:18:20.293 "data_offset": 2048, 00:18:20.293 "data_size": 63488 00:18:20.293 }, 00:18:20.293 { 00:18:20.293 "name": "BaseBdev2", 00:18:20.293 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:20.293 "is_configured": true, 00:18:20.293 "data_offset": 2048, 00:18:20.293 "data_size": 63488 00:18:20.293 }, 00:18:20.293 { 00:18:20.293 "name": "BaseBdev3", 00:18:20.293 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:20.293 "is_configured": true, 00:18:20.293 "data_offset": 2048, 00:18:20.293 "data_size": 63488 00:18:20.293 }, 00:18:20.293 { 00:18:20.293 "name": "BaseBdev4", 00:18:20.293 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:20.293 "is_configured": true, 00:18:20.293 "data_offset": 2048, 00:18:20.293 "data_size": 63488 00:18:20.293 } 00:18:20.293 ] 00:18:20.293 }' 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.293 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.860 "name": "raid_bdev1", 00:18:20.860 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:20.860 "strip_size_kb": 64, 00:18:20.860 "state": "online", 00:18:20.860 "raid_level": "raid5f", 00:18:20.860 "superblock": true, 00:18:20.860 "num_base_bdevs": 4, 00:18:20.860 "num_base_bdevs_discovered": 4, 00:18:20.860 "num_base_bdevs_operational": 4, 00:18:20.860 "base_bdevs_list": [ 00:18:20.860 { 00:18:20.860 "name": "spare", 00:18:20.860 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 { 00:18:20.860 "name": "BaseBdev2", 00:18:20.860 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 { 00:18:20.860 "name": "BaseBdev3", 00:18:20.860 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 { 00:18:20.860 "name": "BaseBdev4", 00:18:20.860 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 } 00:18:20.860 ] 00:18:20.860 }' 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 10:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 [2024-11-19 10:12:35.026774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.860 "name": "raid_bdev1", 00:18:20.860 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:20.860 "strip_size_kb": 64, 00:18:20.860 "state": "online", 00:18:20.860 "raid_level": "raid5f", 00:18:20.860 "superblock": true, 00:18:20.860 "num_base_bdevs": 4, 00:18:20.860 "num_base_bdevs_discovered": 3, 00:18:20.860 "num_base_bdevs_operational": 3, 00:18:20.860 "base_bdevs_list": [ 00:18:20.860 { 00:18:20.860 "name": null, 00:18:20.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.860 "is_configured": false, 00:18:20.860 "data_offset": 0, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 { 00:18:20.860 "name": "BaseBdev2", 00:18:20.860 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 
{ 00:18:20.860 "name": "BaseBdev3", 00:18:20.860 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 }, 00:18:20.860 { 00:18:20.860 "name": "BaseBdev4", 00:18:20.860 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:20.860 "is_configured": true, 00:18:20.860 "data_offset": 2048, 00:18:20.860 "data_size": 63488 00:18:20.860 } 00:18:20.860 ] 00:18:20.860 }' 00:18:20.860 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.120 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.379 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.379 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.379 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.379 [2024-11-19 10:12:35.559036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.379 [2024-11-19 10:12:35.559328] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.379 [2024-11-19 10:12:35.559358] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:21.379 [2024-11-19 10:12:35.559409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.379 [2024-11-19 10:12:35.573829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:21.379 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.379 10:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:21.379 [2024-11-19 10:12:35.583165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.756 "name": "raid_bdev1", 00:18:22.756 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:22.756 "strip_size_kb": 64, 00:18:22.756 "state": "online", 00:18:22.756 
"raid_level": "raid5f", 00:18:22.756 "superblock": true, 00:18:22.756 "num_base_bdevs": 4, 00:18:22.756 "num_base_bdevs_discovered": 4, 00:18:22.756 "num_base_bdevs_operational": 4, 00:18:22.756 "process": { 00:18:22.756 "type": "rebuild", 00:18:22.756 "target": "spare", 00:18:22.756 "progress": { 00:18:22.756 "blocks": 17280, 00:18:22.756 "percent": 9 00:18:22.756 } 00:18:22.756 }, 00:18:22.756 "base_bdevs_list": [ 00:18:22.756 { 00:18:22.756 "name": "spare", 00:18:22.756 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:22.756 "is_configured": true, 00:18:22.756 "data_offset": 2048, 00:18:22.756 "data_size": 63488 00:18:22.756 }, 00:18:22.756 { 00:18:22.756 "name": "BaseBdev2", 00:18:22.756 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:22.756 "is_configured": true, 00:18:22.756 "data_offset": 2048, 00:18:22.756 "data_size": 63488 00:18:22.756 }, 00:18:22.756 { 00:18:22.756 "name": "BaseBdev3", 00:18:22.756 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:22.756 "is_configured": true, 00:18:22.756 "data_offset": 2048, 00:18:22.756 "data_size": 63488 00:18:22.756 }, 00:18:22.756 { 00:18:22.756 "name": "BaseBdev4", 00:18:22.756 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:22.756 "is_configured": true, 00:18:22.756 "data_offset": 2048, 00:18:22.756 "data_size": 63488 00:18:22.756 } 00:18:22.756 ] 00:18:22.756 }' 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.756 [2024-11-19 10:12:36.757673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.756 [2024-11-19 10:12:36.797146] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.756 [2024-11-19 10:12:36.797240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.756 [2024-11-19 10:12:36.797268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.756 [2024-11-19 10:12:36.797287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.756 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.757 "name": "raid_bdev1", 00:18:22.757 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:22.757 "strip_size_kb": 64, 00:18:22.757 "state": "online", 00:18:22.757 "raid_level": "raid5f", 00:18:22.757 "superblock": true, 00:18:22.757 "num_base_bdevs": 4, 00:18:22.757 "num_base_bdevs_discovered": 3, 00:18:22.757 "num_base_bdevs_operational": 3, 00:18:22.757 "base_bdevs_list": [ 00:18:22.757 { 00:18:22.757 "name": null, 00:18:22.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.757 "is_configured": false, 00:18:22.757 "data_offset": 0, 00:18:22.757 "data_size": 63488 00:18:22.757 }, 00:18:22.757 { 00:18:22.757 "name": "BaseBdev2", 00:18:22.757 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:22.757 "is_configured": true, 00:18:22.757 "data_offset": 2048, 00:18:22.757 "data_size": 63488 00:18:22.757 }, 00:18:22.757 { 00:18:22.757 "name": "BaseBdev3", 00:18:22.757 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:22.757 "is_configured": true, 00:18:22.757 "data_offset": 2048, 00:18:22.757 "data_size": 63488 00:18:22.757 }, 00:18:22.757 { 00:18:22.757 "name": "BaseBdev4", 00:18:22.757 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:22.757 "is_configured": true, 00:18:22.757 "data_offset": 2048, 00:18:22.757 "data_size": 63488 00:18:22.757 } 00:18:22.757 ] 00:18:22.757 }' 
00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.757 10:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.325 10:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.325 10:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.325 10:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.325 [2024-11-19 10:12:37.343896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.325 [2024-11-19 10:12:37.343990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.325 [2024-11-19 10:12:37.344038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:23.325 [2024-11-19 10:12:37.344059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.325 [2024-11-19 10:12:37.344765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.325 [2024-11-19 10:12:37.344828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.325 [2024-11-19 10:12:37.344970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:23.325 [2024-11-19 10:12:37.345017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.325 [2024-11-19 10:12:37.345034] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:23.325 [2024-11-19 10:12:37.345071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.325 [2024-11-19 10:12:37.359685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:23.325 spare 00:18:23.325 10:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.325 10:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:23.325 [2024-11-19 10:12:37.369202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.262 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.262 "name": "raid_bdev1", 00:18:24.262 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:24.262 "strip_size_kb": 64, 00:18:24.262 "state": 
"online", 00:18:24.262 "raid_level": "raid5f", 00:18:24.262 "superblock": true, 00:18:24.262 "num_base_bdevs": 4, 00:18:24.262 "num_base_bdevs_discovered": 4, 00:18:24.262 "num_base_bdevs_operational": 4, 00:18:24.262 "process": { 00:18:24.262 "type": "rebuild", 00:18:24.262 "target": "spare", 00:18:24.262 "progress": { 00:18:24.262 "blocks": 17280, 00:18:24.262 "percent": 9 00:18:24.262 } 00:18:24.262 }, 00:18:24.262 "base_bdevs_list": [ 00:18:24.262 { 00:18:24.262 "name": "spare", 00:18:24.262 "uuid": "91692bd0-4b9e-5eb7-b9ab-e47b492c2e7d", 00:18:24.262 "is_configured": true, 00:18:24.262 "data_offset": 2048, 00:18:24.262 "data_size": 63488 00:18:24.262 }, 00:18:24.262 { 00:18:24.262 "name": "BaseBdev2", 00:18:24.262 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:24.262 "is_configured": true, 00:18:24.262 "data_offset": 2048, 00:18:24.262 "data_size": 63488 00:18:24.262 }, 00:18:24.263 { 00:18:24.263 "name": "BaseBdev3", 00:18:24.263 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:24.263 "is_configured": true, 00:18:24.263 "data_offset": 2048, 00:18:24.263 "data_size": 63488 00:18:24.263 }, 00:18:24.263 { 00:18:24.263 "name": "BaseBdev4", 00:18:24.263 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:24.263 "is_configured": true, 00:18:24.263 "data_offset": 2048, 00:18:24.263 "data_size": 63488 00:18:24.263 } 00:18:24.263 ] 00:18:24.263 }' 00:18:24.263 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.263 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.263 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:24.521 10:12:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.521 [2024-11-19 10:12:38.535383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.521 [2024-11-19 10:12:38.583533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:24.521 [2024-11-19 10:12:38.583631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.521 [2024-11-19 10:12:38.583666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.521 [2024-11-19 10:12:38.583679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.521 10:12:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.521 "name": "raid_bdev1", 00:18:24.521 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:24.521 "strip_size_kb": 64, 00:18:24.521 "state": "online", 00:18:24.521 "raid_level": "raid5f", 00:18:24.521 "superblock": true, 00:18:24.521 "num_base_bdevs": 4, 00:18:24.521 "num_base_bdevs_discovered": 3, 00:18:24.521 "num_base_bdevs_operational": 3, 00:18:24.521 "base_bdevs_list": [ 00:18:24.521 { 00:18:24.521 "name": null, 00:18:24.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.521 "is_configured": false, 00:18:24.521 "data_offset": 0, 00:18:24.521 "data_size": 63488 00:18:24.521 }, 00:18:24.521 { 00:18:24.521 "name": "BaseBdev2", 00:18:24.521 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:24.521 "is_configured": true, 00:18:24.521 "data_offset": 2048, 00:18:24.521 "data_size": 63488 00:18:24.521 }, 00:18:24.521 { 00:18:24.521 "name": "BaseBdev3", 00:18:24.521 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:24.521 "is_configured": true, 00:18:24.521 "data_offset": 2048, 00:18:24.521 "data_size": 63488 00:18:24.521 }, 00:18:24.521 { 00:18:24.521 "name": "BaseBdev4", 00:18:24.521 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:24.521 "is_configured": true, 00:18:24.521 "data_offset": 2048, 00:18:24.521 
"data_size": 63488 00:18:24.521 } 00:18:24.521 ] 00:18:24.521 }' 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.521 10:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.092 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.092 "name": "raid_bdev1", 00:18:25.092 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:25.092 "strip_size_kb": 64, 00:18:25.092 "state": "online", 00:18:25.092 "raid_level": "raid5f", 00:18:25.092 "superblock": true, 00:18:25.092 "num_base_bdevs": 4, 00:18:25.092 "num_base_bdevs_discovered": 3, 00:18:25.092 "num_base_bdevs_operational": 3, 00:18:25.093 "base_bdevs_list": [ 00:18:25.093 { 00:18:25.093 "name": null, 00:18:25.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.093 
"is_configured": false, 00:18:25.093 "data_offset": 0, 00:18:25.093 "data_size": 63488 00:18:25.093 }, 00:18:25.093 { 00:18:25.093 "name": "BaseBdev2", 00:18:25.093 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:25.093 "is_configured": true, 00:18:25.093 "data_offset": 2048, 00:18:25.093 "data_size": 63488 00:18:25.093 }, 00:18:25.093 { 00:18:25.093 "name": "BaseBdev3", 00:18:25.093 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:25.093 "is_configured": true, 00:18:25.093 "data_offset": 2048, 00:18:25.093 "data_size": 63488 00:18:25.093 }, 00:18:25.093 { 00:18:25.093 "name": "BaseBdev4", 00:18:25.093 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:25.093 "is_configured": true, 00:18:25.093 "data_offset": 2048, 00:18:25.093 "data_size": 63488 00:18:25.093 } 00:18:25.093 ] 00:18:25.093 }' 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:25.093 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.093 10:12:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.093 [2024-11-19 10:12:39.297828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:25.093 [2024-11-19 10:12:39.298049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.094 [2024-11-19 10:12:39.298098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:25.094 [2024-11-19 10:12:39.298116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.094 [2024-11-19 10:12:39.298823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.094 [2024-11-19 10:12:39.298884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:25.094 [2024-11-19 10:12:39.299005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:25.094 [2024-11-19 10:12:39.299029] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:25.094 [2024-11-19 10:12:39.299045] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:25.094 [2024-11-19 10:12:39.299060] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:25.094 BaseBdev1 00:18:25.094 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.094 10:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.482 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.482 "name": "raid_bdev1", 00:18:26.482 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:26.482 "strip_size_kb": 64, 00:18:26.482 "state": "online", 00:18:26.482 "raid_level": "raid5f", 00:18:26.482 "superblock": true, 00:18:26.482 "num_base_bdevs": 4, 00:18:26.482 "num_base_bdevs_discovered": 3, 00:18:26.482 "num_base_bdevs_operational": 3, 00:18:26.482 "base_bdevs_list": [ 00:18:26.482 { 00:18:26.482 "name": null, 00:18:26.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.482 "is_configured": false, 00:18:26.482 
"data_offset": 0, 00:18:26.482 "data_size": 63488 00:18:26.482 }, 00:18:26.482 { 00:18:26.482 "name": "BaseBdev2", 00:18:26.482 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:26.482 "is_configured": true, 00:18:26.482 "data_offset": 2048, 00:18:26.482 "data_size": 63488 00:18:26.482 }, 00:18:26.482 { 00:18:26.482 "name": "BaseBdev3", 00:18:26.482 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:26.482 "is_configured": true, 00:18:26.482 "data_offset": 2048, 00:18:26.482 "data_size": 63488 00:18:26.482 }, 00:18:26.482 { 00:18:26.483 "name": "BaseBdev4", 00:18:26.483 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:26.483 "is_configured": true, 00:18:26.483 "data_offset": 2048, 00:18:26.483 "data_size": 63488 00:18:26.483 } 00:18:26.483 ] 00:18:26.483 }' 00:18:26.483 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.483 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.776 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.776 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.777 "name": "raid_bdev1", 00:18:26.777 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:26.777 "strip_size_kb": 64, 00:18:26.777 "state": "online", 00:18:26.777 "raid_level": "raid5f", 00:18:26.777 "superblock": true, 00:18:26.777 "num_base_bdevs": 4, 00:18:26.777 "num_base_bdevs_discovered": 3, 00:18:26.777 "num_base_bdevs_operational": 3, 00:18:26.777 "base_bdevs_list": [ 00:18:26.777 { 00:18:26.777 "name": null, 00:18:26.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.777 "is_configured": false, 00:18:26.777 "data_offset": 0, 00:18:26.777 "data_size": 63488 00:18:26.777 }, 00:18:26.777 { 00:18:26.777 "name": "BaseBdev2", 00:18:26.777 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:26.777 "is_configured": true, 00:18:26.777 "data_offset": 2048, 00:18:26.777 "data_size": 63488 00:18:26.777 }, 00:18:26.777 { 00:18:26.777 "name": "BaseBdev3", 00:18:26.777 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:26.777 "is_configured": true, 00:18:26.777 "data_offset": 2048, 00:18:26.777 "data_size": 63488 00:18:26.777 }, 00:18:26.777 { 00:18:26.777 "name": "BaseBdev4", 00:18:26.777 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:26.777 "is_configured": true, 00:18:26.777 "data_offset": 2048, 00:18:26.777 "data_size": 63488 00:18:26.777 } 00:18:26.777 ] 00:18:26.777 }' 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.777 10:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.777 
10:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.777 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.035 [2024-11-19 10:12:41.014493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.035 [2024-11-19 10:12:41.014743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.035 [2024-11-19 10:12:41.014767] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:27.035 request: 00:18:27.035 { 00:18:27.035 "base_bdev": "BaseBdev1", 00:18:27.035 "raid_bdev": "raid_bdev1", 00:18:27.035 "method": "bdev_raid_add_base_bdev", 00:18:27.035 "req_id": 1 00:18:27.035 } 00:18:27.035 Got JSON-RPC error response 00:18:27.035 response: 00:18:27.035 { 00:18:27.035 "code": -22, 00:18:27.035 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:27.035 } 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.035 10:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.970 "name": "raid_bdev1", 00:18:27.970 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:27.970 "strip_size_kb": 64, 00:18:27.970 "state": "online", 00:18:27.970 "raid_level": "raid5f", 00:18:27.970 "superblock": true, 00:18:27.970 "num_base_bdevs": 4, 00:18:27.970 "num_base_bdevs_discovered": 3, 00:18:27.970 "num_base_bdevs_operational": 3, 00:18:27.970 "base_bdevs_list": [ 00:18:27.970 { 00:18:27.970 "name": null, 00:18:27.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.970 "is_configured": false, 00:18:27.970 "data_offset": 0, 00:18:27.970 "data_size": 63488 00:18:27.970 }, 00:18:27.970 { 00:18:27.970 "name": "BaseBdev2", 00:18:27.970 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:27.970 "is_configured": true, 00:18:27.970 "data_offset": 2048, 00:18:27.970 "data_size": 63488 00:18:27.970 }, 00:18:27.970 { 00:18:27.970 "name": "BaseBdev3", 00:18:27.970 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:27.970 "is_configured": true, 00:18:27.970 "data_offset": 2048, 00:18:27.970 "data_size": 63488 00:18:27.970 }, 00:18:27.970 { 00:18:27.970 "name": "BaseBdev4", 00:18:27.970 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:27.970 "is_configured": true, 00:18:27.970 "data_offset": 2048, 00:18:27.970 "data_size": 63488 00:18:27.970 } 00:18:27.970 ] 00:18:27.970 }' 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.970 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.537 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.537 "name": "raid_bdev1", 00:18:28.537 "uuid": "2d501056-6f3f-4444-adf7-f97c788395eb", 00:18:28.537 "strip_size_kb": 64, 00:18:28.537 "state": "online", 00:18:28.537 "raid_level": "raid5f", 00:18:28.537 "superblock": true, 00:18:28.537 "num_base_bdevs": 4, 00:18:28.537 "num_base_bdevs_discovered": 3, 00:18:28.537 "num_base_bdevs_operational": 3, 00:18:28.537 "base_bdevs_list": [ 00:18:28.537 { 00:18:28.537 "name": null, 00:18:28.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.537 "is_configured": false, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 63488 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev2", 00:18:28.537 "uuid": "d7e6615c-dfd8-5617-9767-cee9a55ca415", 00:18:28.537 "is_configured": true, 
00:18:28.537 "data_offset": 2048, 00:18:28.537 "data_size": 63488 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev3", 00:18:28.537 "uuid": "2625bc0d-341e-5fa8-834b-8e335760c2c6", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 2048, 00:18:28.537 "data_size": 63488 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev4", 00:18:28.537 "uuid": "556d7e75-0db5-5b5b-a4d5-8af328f8e85e", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 2048, 00:18:28.538 "data_size": 63488 00:18:28.538 } 00:18:28.538 ] 00:18:28.538 }' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85522 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85522 ']' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85522 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85522 00:18:28.538 killing process with pid 85522 00:18:28.538 Received shutdown signal, test time was about 60.000000 seconds 00:18:28.538 00:18:28.538 Latency(us) 00:18:28.538 [2024-11-19T10:12:42.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.538 [2024-11-19T10:12:42.770Z] 
=================================================================================================================== 00:18:28.538 [2024-11-19T10:12:42.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85522' 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85522 00:18:28.538 [2024-11-19 10:12:42.743974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.538 10:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85522 00:18:28.538 [2024-11-19 10:12:42.744211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.538 [2024-11-19 10:12:42.744378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.538 [2024-11-19 10:12:42.744410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:29.106 [2024-11-19 10:12:43.180116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.486 10:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:30.487 00:18:30.487 real 0m29.109s 00:18:30.487 user 0m37.916s 00:18:30.487 sys 0m3.053s 00:18:30.487 10:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.487 10:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.487 ************************************ 00:18:30.487 END TEST raid5f_rebuild_test_sb 00:18:30.487 ************************************ 00:18:30.487 10:12:44 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:30.487 10:12:44 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:30.487 10:12:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:30.487 10:12:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.487 10:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.487 ************************************ 00:18:30.487 START TEST raid_state_function_test_sb_4k 00:18:30.487 ************************************ 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:30.487 10:12:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86347 00:18:30.487 Process raid pid: 86347 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86347' 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86347 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86347 ']' 00:18:30.487 10:12:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.487 10:12:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.487 [2024-11-19 10:12:44.490937] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:18:30.487 [2024-11-19 10:12:44.491148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.487 [2024-11-19 10:12:44.689077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.746 [2024-11-19 10:12:44.860546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.005 [2024-11-19 10:12:45.094405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.005 [2024-11-19 10:12:45.094475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.265 [2024-11-19 10:12:45.470609] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.265 [2024-11-19 10:12:45.470691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.265 [2024-11-19 10:12:45.470710] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.265 [2024-11-19 10:12:45.470727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.265 
10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.265 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.523 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.523 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.523 "name": "Existed_Raid", 00:18:31.523 "uuid": "2eb57089-ecdf-40d3-80a7-ae89fd701794", 00:18:31.523 "strip_size_kb": 0, 00:18:31.523 "state": "configuring", 00:18:31.523 "raid_level": "raid1", 00:18:31.523 "superblock": true, 00:18:31.523 "num_base_bdevs": 2, 00:18:31.523 "num_base_bdevs_discovered": 0, 00:18:31.523 "num_base_bdevs_operational": 2, 00:18:31.523 "base_bdevs_list": [ 00:18:31.523 { 00:18:31.523 "name": "BaseBdev1", 00:18:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.523 "is_configured": false, 00:18:31.523 "data_offset": 0, 00:18:31.523 "data_size": 0 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "name": "BaseBdev2", 00:18:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.523 "is_configured": false, 00:18:31.523 "data_offset": 0, 00:18:31.523 "data_size": 0 00:18:31.523 } 00:18:31.523 ] 00:18:31.523 }' 00:18:31.523 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.524 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.781 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:31.782 10:12:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.782 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.782 [2024-11-19 10:12:46.006698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.782 [2024-11-19 10:12:46.006898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:31.782 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.782 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:31.782 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.782 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.041 [2024-11-19 10:12:46.014674] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.041 [2024-11-19 10:12:46.014892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.041 [2024-11-19 10:12:46.014920] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.041 [2024-11-19 10:12:46.014943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.041 10:12:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.041 [2024-11-19 10:12:46.065577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.041 BaseBdev1 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.041 [ 00:18:32.041 { 00:18:32.041 "name": "BaseBdev1", 00:18:32.041 "aliases": [ 00:18:32.041 
"7bc3c04b-eb26-4780-a939-eb81d23487bd" 00:18:32.041 ], 00:18:32.041 "product_name": "Malloc disk", 00:18:32.041 "block_size": 4096, 00:18:32.041 "num_blocks": 8192, 00:18:32.041 "uuid": "7bc3c04b-eb26-4780-a939-eb81d23487bd", 00:18:32.041 "assigned_rate_limits": { 00:18:32.041 "rw_ios_per_sec": 0, 00:18:32.041 "rw_mbytes_per_sec": 0, 00:18:32.041 "r_mbytes_per_sec": 0, 00:18:32.041 "w_mbytes_per_sec": 0 00:18:32.041 }, 00:18:32.041 "claimed": true, 00:18:32.041 "claim_type": "exclusive_write", 00:18:32.041 "zoned": false, 00:18:32.041 "supported_io_types": { 00:18:32.041 "read": true, 00:18:32.041 "write": true, 00:18:32.041 "unmap": true, 00:18:32.041 "flush": true, 00:18:32.041 "reset": true, 00:18:32.041 "nvme_admin": false, 00:18:32.041 "nvme_io": false, 00:18:32.041 "nvme_io_md": false, 00:18:32.041 "write_zeroes": true, 00:18:32.041 "zcopy": true, 00:18:32.041 "get_zone_info": false, 00:18:32.041 "zone_management": false, 00:18:32.041 "zone_append": false, 00:18:32.041 "compare": false, 00:18:32.041 "compare_and_write": false, 00:18:32.041 "abort": true, 00:18:32.041 "seek_hole": false, 00:18:32.041 "seek_data": false, 00:18:32.041 "copy": true, 00:18:32.041 "nvme_iov_md": false 00:18:32.041 }, 00:18:32.041 "memory_domains": [ 00:18:32.041 { 00:18:32.041 "dma_device_id": "system", 00:18:32.041 "dma_device_type": 1 00:18:32.041 }, 00:18:32.041 { 00:18:32.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.041 "dma_device_type": 2 00:18:32.041 } 00:18:32.041 ], 00:18:32.041 "driver_specific": {} 00:18:32.041 } 00:18:32.041 ] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.041 "name": "Existed_Raid", 00:18:32.041 "uuid": "cdc15a00-1e74-4c03-a3b9-07d9469c8d34", 00:18:32.041 "strip_size_kb": 0, 00:18:32.041 "state": "configuring", 00:18:32.041 "raid_level": "raid1", 00:18:32.041 "superblock": true, 00:18:32.041 "num_base_bdevs": 2, 00:18:32.041 
"num_base_bdevs_discovered": 1, 00:18:32.041 "num_base_bdevs_operational": 2, 00:18:32.041 "base_bdevs_list": [ 00:18:32.041 { 00:18:32.041 "name": "BaseBdev1", 00:18:32.041 "uuid": "7bc3c04b-eb26-4780-a939-eb81d23487bd", 00:18:32.041 "is_configured": true, 00:18:32.041 "data_offset": 256, 00:18:32.041 "data_size": 7936 00:18:32.041 }, 00:18:32.041 { 00:18:32.041 "name": "BaseBdev2", 00:18:32.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.041 "is_configured": false, 00:18:32.041 "data_offset": 0, 00:18:32.041 "data_size": 0 00:18:32.041 } 00:18:32.041 ] 00:18:32.041 }' 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.041 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.608 [2024-11-19 10:12:46.613783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.608 [2024-11-19 10:12:46.614023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.608 [2024-11-19 10:12:46.625865] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.608 [2024-11-19 10:12:46.628760] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.608 [2024-11-19 10:12:46.628847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.608 "name": "Existed_Raid", 00:18:32.608 "uuid": "4d8659f0-ca33-45aa-a372-88c887b90f5a", 00:18:32.608 "strip_size_kb": 0, 00:18:32.608 "state": "configuring", 00:18:32.608 "raid_level": "raid1", 00:18:32.608 "superblock": true, 00:18:32.608 "num_base_bdevs": 2, 00:18:32.608 "num_base_bdevs_discovered": 1, 00:18:32.608 "num_base_bdevs_operational": 2, 00:18:32.608 "base_bdevs_list": [ 00:18:32.608 { 00:18:32.608 "name": "BaseBdev1", 00:18:32.608 "uuid": "7bc3c04b-eb26-4780-a939-eb81d23487bd", 00:18:32.608 "is_configured": true, 00:18:32.608 "data_offset": 256, 00:18:32.608 "data_size": 7936 00:18:32.608 }, 00:18:32.608 { 00:18:32.608 "name": "BaseBdev2", 00:18:32.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.608 "is_configured": false, 00:18:32.608 "data_offset": 0, 00:18:32.608 "data_size": 0 00:18:32.608 } 00:18:32.608 ] 00:18:32.608 }' 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.608 10:12:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.176 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:33.176 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.176 10:12:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.176 [2024-11-19 10:12:47.176973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.176 [2024-11-19 10:12:47.177575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:33.176 [2024-11-19 10:12:47.177713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.176 BaseBdev2 00:18:33.176 [2024-11-19 10:12:47.178123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:33.176 [2024-11-19 10:12:47.178377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:33.177 [2024-11-19 10:12:47.178399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:33.177 [2024-11-19 10:12:47.178603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:33.177 10:12:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.177 [ 00:18:33.177 { 00:18:33.177 "name": "BaseBdev2", 00:18:33.177 "aliases": [ 00:18:33.177 "602c7fef-daed-4211-b648-80e2c8d445d3" 00:18:33.177 ], 00:18:33.177 "product_name": "Malloc disk", 00:18:33.177 "block_size": 4096, 00:18:33.177 "num_blocks": 8192, 00:18:33.177 "uuid": "602c7fef-daed-4211-b648-80e2c8d445d3", 00:18:33.177 "assigned_rate_limits": { 00:18:33.177 "rw_ios_per_sec": 0, 00:18:33.177 "rw_mbytes_per_sec": 0, 00:18:33.177 "r_mbytes_per_sec": 0, 00:18:33.177 "w_mbytes_per_sec": 0 00:18:33.177 }, 00:18:33.177 "claimed": true, 00:18:33.177 "claim_type": "exclusive_write", 00:18:33.177 "zoned": false, 00:18:33.177 "supported_io_types": { 00:18:33.177 "read": true, 00:18:33.177 "write": true, 00:18:33.177 "unmap": true, 00:18:33.177 "flush": true, 00:18:33.177 "reset": true, 00:18:33.177 "nvme_admin": false, 00:18:33.177 "nvme_io": false, 00:18:33.177 "nvme_io_md": false, 00:18:33.177 "write_zeroes": true, 00:18:33.177 "zcopy": true, 00:18:33.177 "get_zone_info": false, 00:18:33.177 "zone_management": false, 00:18:33.177 "zone_append": false, 00:18:33.177 "compare": false, 00:18:33.177 "compare_and_write": false, 00:18:33.177 "abort": true, 00:18:33.177 "seek_hole": false, 00:18:33.177 "seek_data": false, 00:18:33.177 "copy": true, 00:18:33.177 "nvme_iov_md": false 
00:18:33.177 }, 00:18:33.177 "memory_domains": [ 00:18:33.177 { 00:18:33.177 "dma_device_id": "system", 00:18:33.177 "dma_device_type": 1 00:18:33.177 }, 00:18:33.177 { 00:18:33.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.177 "dma_device_type": 2 00:18:33.177 } 00:18:33.177 ], 00:18:33.177 "driver_specific": {} 00:18:33.177 } 00:18:33.177 ] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.177 "name": "Existed_Raid", 00:18:33.177 "uuid": "4d8659f0-ca33-45aa-a372-88c887b90f5a", 00:18:33.177 "strip_size_kb": 0, 00:18:33.177 "state": "online", 00:18:33.177 "raid_level": "raid1", 00:18:33.177 "superblock": true, 00:18:33.177 "num_base_bdevs": 2, 00:18:33.177 "num_base_bdevs_discovered": 2, 00:18:33.177 "num_base_bdevs_operational": 2, 00:18:33.177 "base_bdevs_list": [ 00:18:33.177 { 00:18:33.177 "name": "BaseBdev1", 00:18:33.177 "uuid": "7bc3c04b-eb26-4780-a939-eb81d23487bd", 00:18:33.177 "is_configured": true, 00:18:33.177 "data_offset": 256, 00:18:33.177 "data_size": 7936 00:18:33.177 }, 00:18:33.177 { 00:18:33.177 "name": "BaseBdev2", 00:18:33.177 "uuid": "602c7fef-daed-4211-b648-80e2c8d445d3", 00:18:33.177 "is_configured": true, 00:18:33.177 "data_offset": 256, 00:18:33.177 "data_size": 7936 00:18:33.177 } 00:18:33.177 ] 00:18:33.177 }' 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.177 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:33.744 10:12:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.744 [2024-11-19 10:12:47.745576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.744 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:33.744 "name": "Existed_Raid", 00:18:33.744 "aliases": [ 00:18:33.744 "4d8659f0-ca33-45aa-a372-88c887b90f5a" 00:18:33.744 ], 00:18:33.744 "product_name": "Raid Volume", 00:18:33.744 "block_size": 4096, 00:18:33.744 "num_blocks": 7936, 00:18:33.744 "uuid": "4d8659f0-ca33-45aa-a372-88c887b90f5a", 00:18:33.744 "assigned_rate_limits": { 00:18:33.744 "rw_ios_per_sec": 0, 00:18:33.744 "rw_mbytes_per_sec": 0, 00:18:33.744 "r_mbytes_per_sec": 0, 00:18:33.744 "w_mbytes_per_sec": 0 00:18:33.744 }, 00:18:33.744 "claimed": false, 00:18:33.744 "zoned": false, 00:18:33.744 "supported_io_types": { 00:18:33.744 "read": true, 
00:18:33.744 "write": true, 00:18:33.744 "unmap": false, 00:18:33.744 "flush": false, 00:18:33.745 "reset": true, 00:18:33.745 "nvme_admin": false, 00:18:33.745 "nvme_io": false, 00:18:33.745 "nvme_io_md": false, 00:18:33.745 "write_zeroes": true, 00:18:33.745 "zcopy": false, 00:18:33.745 "get_zone_info": false, 00:18:33.745 "zone_management": false, 00:18:33.745 "zone_append": false, 00:18:33.745 "compare": false, 00:18:33.745 "compare_and_write": false, 00:18:33.745 "abort": false, 00:18:33.745 "seek_hole": false, 00:18:33.745 "seek_data": false, 00:18:33.745 "copy": false, 00:18:33.745 "nvme_iov_md": false 00:18:33.745 }, 00:18:33.745 "memory_domains": [ 00:18:33.745 { 00:18:33.745 "dma_device_id": "system", 00:18:33.745 "dma_device_type": 1 00:18:33.745 }, 00:18:33.745 { 00:18:33.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.745 "dma_device_type": 2 00:18:33.745 }, 00:18:33.745 { 00:18:33.745 "dma_device_id": "system", 00:18:33.745 "dma_device_type": 1 00:18:33.745 }, 00:18:33.745 { 00:18:33.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.745 "dma_device_type": 2 00:18:33.745 } 00:18:33.745 ], 00:18:33.745 "driver_specific": { 00:18:33.745 "raid": { 00:18:33.745 "uuid": "4d8659f0-ca33-45aa-a372-88c887b90f5a", 00:18:33.745 "strip_size_kb": 0, 00:18:33.745 "state": "online", 00:18:33.745 "raid_level": "raid1", 00:18:33.745 "superblock": true, 00:18:33.745 "num_base_bdevs": 2, 00:18:33.745 "num_base_bdevs_discovered": 2, 00:18:33.745 "num_base_bdevs_operational": 2, 00:18:33.745 "base_bdevs_list": [ 00:18:33.745 { 00:18:33.745 "name": "BaseBdev1", 00:18:33.745 "uuid": "7bc3c04b-eb26-4780-a939-eb81d23487bd", 00:18:33.745 "is_configured": true, 00:18:33.745 "data_offset": 256, 00:18:33.745 "data_size": 7936 00:18:33.745 }, 00:18:33.745 { 00:18:33.745 "name": "BaseBdev2", 00:18:33.745 "uuid": "602c7fef-daed-4211-b648-80e2c8d445d3", 00:18:33.745 "is_configured": true, 00:18:33.745 "data_offset": 256, 00:18:33.745 "data_size": 7936 00:18:33.745 } 
00:18:33.745 ] 00:18:33.745 } 00:18:33.745 } 00:18:33.745 }' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:33.745 BaseBdev2' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.745 10:12:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.004 [2024-11-19 10:12:48.013334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:34.004 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:34.005 10:12:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.005 "name": "Existed_Raid", 00:18:34.005 "uuid": "4d8659f0-ca33-45aa-a372-88c887b90f5a", 00:18:34.005 "strip_size_kb": 0, 00:18:34.005 "state": "online", 00:18:34.005 "raid_level": "raid1", 00:18:34.005 "superblock": true, 00:18:34.005 
"num_base_bdevs": 2, 00:18:34.005 "num_base_bdevs_discovered": 1, 00:18:34.005 "num_base_bdevs_operational": 1, 00:18:34.005 "base_bdevs_list": [ 00:18:34.005 { 00:18:34.005 "name": null, 00:18:34.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.005 "is_configured": false, 00:18:34.005 "data_offset": 0, 00:18:34.005 "data_size": 7936 00:18:34.005 }, 00:18:34.005 { 00:18:34.005 "name": "BaseBdev2", 00:18:34.005 "uuid": "602c7fef-daed-4211-b648-80e2c8d445d3", 00:18:34.005 "is_configured": true, 00:18:34.005 "data_offset": 256, 00:18:34.005 "data_size": 7936 00:18:34.005 } 00:18:34.005 ] 00:18:34.005 }' 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.005 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.572 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.572 [2024-11-19 10:12:48.711874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.572 [2024-11-19 10:12:48.712021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.831 [2024-11-19 10:12:48.806559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.831 [2024-11-19 10:12:48.806770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.832 [2024-11-19 10:12:48.806976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:34.832 10:12:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86347 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86347 ']' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86347 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86347 00:18:34.832 killing process with pid 86347 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86347' 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86347 00:18:34.832 [2024-11-19 10:12:48.898039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.832 10:12:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86347 00:18:34.832 [2024-11-19 10:12:48.914129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.207 10:12:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:36.207 00:18:36.207 real 0m5.706s 00:18:36.207 user 0m8.483s 00:18:36.207 sys 0m0.893s 00:18:36.207 10:12:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.207 ************************************ 00:18:36.207 END TEST raid_state_function_test_sb_4k 00:18:36.207 10:12:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.207 ************************************ 00:18:36.207 10:12:50 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:36.207 10:12:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:36.207 10:12:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.208 10:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.208 ************************************ 00:18:36.208 START TEST raid_superblock_test_4k 00:18:36.208 ************************************ 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:36.208 
10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:36.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86605 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86605 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86605 ']' 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.208 10:12:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.208 [2024-11-19 10:12:50.250138] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:18:36.208 [2024-11-19 10:12:50.250376] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86605 ] 00:18:36.467 [2024-11-19 10:12:50.442939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.467 [2024-11-19 10:12:50.594968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.725 [2024-11-19 10:12:50.828156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.725 [2024-11-19 10:12:50.828234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 malloc1 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.292 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 [2024-11-19 10:12:51.325248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.293 [2024-11-19 10:12:51.325466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.293 [2024-11-19 10:12:51.325546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:37.293 [2024-11-19 10:12:51.325671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.293 [2024-11-19 10:12:51.328808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.293 [2024-11-19 10:12:51.328971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.293 pt1 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.293 malloc2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.293 [2024-11-19 10:12:51.387409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.293 [2024-11-19 10:12:51.387481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.293 [2024-11-19 10:12:51.387522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:37.293 [2024-11-19 10:12:51.387551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.293 [2024-11-19 10:12:51.390699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.293 [2024-11-19 
10:12:51.390742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.293 pt2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.293 [2024-11-19 10:12:51.399624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.293 [2024-11-19 10:12:51.402349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.293 [2024-11-19 10:12:51.402620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:37.293 [2024-11-19 10:12:51.402643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.293 [2024-11-19 10:12:51.403036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:37.293 [2024-11-19 10:12:51.403256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:37.293 [2024-11-19 10:12:51.403282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:37.293 [2024-11-19 10:12:51.403509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.293 "name": "raid_bdev1", 00:18:37.293 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:37.293 "strip_size_kb": 0, 00:18:37.293 "state": "online", 00:18:37.293 "raid_level": "raid1", 00:18:37.293 "superblock": true, 00:18:37.293 "num_base_bdevs": 2, 00:18:37.293 
"num_base_bdevs_discovered": 2, 00:18:37.293 "num_base_bdevs_operational": 2, 00:18:37.293 "base_bdevs_list": [ 00:18:37.293 { 00:18:37.293 "name": "pt1", 00:18:37.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.293 "is_configured": true, 00:18:37.293 "data_offset": 256, 00:18:37.293 "data_size": 7936 00:18:37.293 }, 00:18:37.293 { 00:18:37.293 "name": "pt2", 00:18:37.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.293 "is_configured": true, 00:18:37.293 "data_offset": 256, 00:18:37.293 "data_size": 7936 00:18:37.293 } 00:18:37.293 ] 00:18:37.293 }' 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.293 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.861 [2024-11-19 10:12:51.884239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.861 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.861 "name": "raid_bdev1", 00:18:37.861 "aliases": [ 00:18:37.861 "7d966e6f-9302-4ebc-90b5-ab36944d0ed0" 00:18:37.861 ], 00:18:37.861 "product_name": "Raid Volume", 00:18:37.861 "block_size": 4096, 00:18:37.861 "num_blocks": 7936, 00:18:37.861 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:37.861 "assigned_rate_limits": { 00:18:37.861 "rw_ios_per_sec": 0, 00:18:37.861 "rw_mbytes_per_sec": 0, 00:18:37.861 "r_mbytes_per_sec": 0, 00:18:37.862 "w_mbytes_per_sec": 0 00:18:37.862 }, 00:18:37.862 "claimed": false, 00:18:37.862 "zoned": false, 00:18:37.862 "supported_io_types": { 00:18:37.862 "read": true, 00:18:37.862 "write": true, 00:18:37.862 "unmap": false, 00:18:37.862 "flush": false, 00:18:37.862 "reset": true, 00:18:37.862 "nvme_admin": false, 00:18:37.862 "nvme_io": false, 00:18:37.862 "nvme_io_md": false, 00:18:37.862 "write_zeroes": true, 00:18:37.862 "zcopy": false, 00:18:37.862 "get_zone_info": false, 00:18:37.862 "zone_management": false, 00:18:37.862 "zone_append": false, 00:18:37.862 "compare": false, 00:18:37.862 "compare_and_write": false, 00:18:37.862 "abort": false, 00:18:37.862 "seek_hole": false, 00:18:37.862 "seek_data": false, 00:18:37.862 "copy": false, 00:18:37.862 "nvme_iov_md": false 00:18:37.862 }, 00:18:37.862 "memory_domains": [ 00:18:37.862 { 00:18:37.862 "dma_device_id": "system", 00:18:37.862 "dma_device_type": 1 00:18:37.862 }, 00:18:37.862 { 00:18:37.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.862 "dma_device_type": 2 00:18:37.862 }, 00:18:37.862 { 00:18:37.862 "dma_device_id": "system", 00:18:37.862 "dma_device_type": 1 00:18:37.862 }, 00:18:37.862 { 00:18:37.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.862 "dma_device_type": 2 00:18:37.862 } 00:18:37.862 ], 
00:18:37.862 "driver_specific": { 00:18:37.862 "raid": { 00:18:37.862 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:37.862 "strip_size_kb": 0, 00:18:37.862 "state": "online", 00:18:37.862 "raid_level": "raid1", 00:18:37.862 "superblock": true, 00:18:37.862 "num_base_bdevs": 2, 00:18:37.862 "num_base_bdevs_discovered": 2, 00:18:37.862 "num_base_bdevs_operational": 2, 00:18:37.862 "base_bdevs_list": [ 00:18:37.862 { 00:18:37.862 "name": "pt1", 00:18:37.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.862 "is_configured": true, 00:18:37.862 "data_offset": 256, 00:18:37.862 "data_size": 7936 00:18:37.862 }, 00:18:37.862 { 00:18:37.862 "name": "pt2", 00:18:37.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.862 "is_configured": true, 00:18:37.862 "data_offset": 256, 00:18:37.862 "data_size": 7936 00:18:37.862 } 00:18:37.862 ] 00:18:37.862 } 00:18:37.862 } 00:18:37.862 }' 00:18:37.862 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.862 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:37.862 pt2' 00:18:37.862 10:12:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.862 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:38.121 [2024-11-19 10:12:52.148199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7d966e6f-9302-4ebc-90b5-ab36944d0ed0 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 7d966e6f-9302-4ebc-90b5-ab36944d0ed0 ']' 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.121 [2024-11-19 10:12:52.195798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.121 [2024-11-19 10:12:52.195948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.121 [2024-11-19 10:12:52.196199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.121 [2024-11-19 10:12:52.196416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.121 [2024-11-19 10:12:52.196573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:38.121 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.122 [2024-11-19 10:12:52.339895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:38.122 [2024-11-19 10:12:52.342715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:38.122 [2024-11-19 10:12:52.342944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:38.122 [2024-11-19 10:12:52.343169] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:38.122 [2024-11-19 10:12:52.343369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.122 [2024-11-19 10:12:52.343560] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:38.122 request: 00:18:38.122 { 00:18:38.122 "name": "raid_bdev1", 00:18:38.122 "raid_level": "raid1", 00:18:38.122 "base_bdevs": [ 00:18:38.122 "malloc1", 00:18:38.122 "malloc2" 00:18:38.122 ], 00:18:38.122 "superblock": false, 00:18:38.122 "method": "bdev_raid_create", 00:18:38.122 "req_id": 1 00:18:38.122 } 00:18:38.122 Got JSON-RPC error response 00:18:38.122 response: 00:18:38.122 { 00:18:38.122 "code": -17, 00:18:38.122 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:38.122 } 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.122 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:38.381 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.382 [2024-11-19 10:12:52.407966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.382 [2024-11-19 10:12:52.408030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.382 [2024-11-19 10:12:52.408056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:38.382 [2024-11-19 10:12:52.408073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.382 [2024-11-19 10:12:52.411124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.382 [2024-11-19 10:12:52.411173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.382 [2024-11-19 10:12:52.411264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:38.382 [2024-11-19 10:12:52.411347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.382 pt1 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.382 "name": "raid_bdev1", 00:18:38.382 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:38.382 "strip_size_kb": 0, 00:18:38.382 "state": "configuring", 00:18:38.382 "raid_level": "raid1", 00:18:38.382 "superblock": true, 00:18:38.382 "num_base_bdevs": 2, 00:18:38.382 "num_base_bdevs_discovered": 1, 00:18:38.382 "num_base_bdevs_operational": 2, 00:18:38.382 "base_bdevs_list": [ 00:18:38.382 { 00:18:38.382 "name": "pt1", 00:18:38.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.382 "is_configured": true, 00:18:38.382 "data_offset": 256, 00:18:38.382 "data_size": 7936 00:18:38.382 }, 00:18:38.382 { 00:18:38.382 "name": null, 00:18:38.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.382 "is_configured": false, 00:18:38.382 "data_offset": 256, 00:18:38.382 "data_size": 7936 00:18:38.382 } 
00:18:38.382 ] 00:18:38.382 }' 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.382 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.949 [2024-11-19 10:12:52.916655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.949 [2024-11-19 10:12:52.916785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.949 [2024-11-19 10:12:52.916851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:38.949 [2024-11-19 10:12:52.916871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.949 [2024-11-19 10:12:52.917560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.949 [2024-11-19 10:12:52.917615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.949 [2024-11-19 10:12:52.917725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.949 [2024-11-19 10:12:52.917772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.949 [2024-11-19 10:12:52.917957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:38.949 [2024-11-19 10:12:52.917978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:38.949 [2024-11-19 10:12:52.918294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:38.949 [2024-11-19 10:12:52.918505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:38.949 [2024-11-19 10:12:52.918537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:38.949 [2024-11-19 10:12:52.918734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.949 pt2 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.949 "name": "raid_bdev1", 00:18:38.949 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:38.949 "strip_size_kb": 0, 00:18:38.949 "state": "online", 00:18:38.949 "raid_level": "raid1", 00:18:38.949 "superblock": true, 00:18:38.949 "num_base_bdevs": 2, 00:18:38.949 "num_base_bdevs_discovered": 2, 00:18:38.949 "num_base_bdevs_operational": 2, 00:18:38.949 "base_bdevs_list": [ 00:18:38.949 { 00:18:38.949 "name": "pt1", 00:18:38.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.949 "is_configured": true, 00:18:38.949 "data_offset": 256, 00:18:38.949 "data_size": 7936 00:18:38.949 }, 00:18:38.949 { 00:18:38.949 "name": "pt2", 00:18:38.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.949 "is_configured": true, 00:18:38.949 "data_offset": 256, 00:18:38.949 "data_size": 7936 00:18:38.949 } 00:18:38.949 ] 00:18:38.949 }' 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.949 10:12:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.516 [2024-11-19 10:12:53.477226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.516 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.516 "name": "raid_bdev1", 00:18:39.516 "aliases": [ 00:18:39.517 "7d966e6f-9302-4ebc-90b5-ab36944d0ed0" 00:18:39.517 ], 00:18:39.517 "product_name": "Raid Volume", 00:18:39.517 "block_size": 4096, 00:18:39.517 "num_blocks": 7936, 00:18:39.517 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:39.517 "assigned_rate_limits": { 00:18:39.517 "rw_ios_per_sec": 0, 00:18:39.517 "rw_mbytes_per_sec": 0, 00:18:39.517 "r_mbytes_per_sec": 0, 00:18:39.517 "w_mbytes_per_sec": 0 00:18:39.517 }, 00:18:39.517 "claimed": false, 00:18:39.517 "zoned": false, 00:18:39.517 "supported_io_types": { 00:18:39.517 "read": true, 00:18:39.517 "write": true, 00:18:39.517 "unmap": false, 
00:18:39.517 "flush": false, 00:18:39.517 "reset": true, 00:18:39.517 "nvme_admin": false, 00:18:39.517 "nvme_io": false, 00:18:39.517 "nvme_io_md": false, 00:18:39.517 "write_zeroes": true, 00:18:39.517 "zcopy": false, 00:18:39.517 "get_zone_info": false, 00:18:39.517 "zone_management": false, 00:18:39.517 "zone_append": false, 00:18:39.517 "compare": false, 00:18:39.517 "compare_and_write": false, 00:18:39.517 "abort": false, 00:18:39.517 "seek_hole": false, 00:18:39.517 "seek_data": false, 00:18:39.517 "copy": false, 00:18:39.517 "nvme_iov_md": false 00:18:39.517 }, 00:18:39.517 "memory_domains": [ 00:18:39.517 { 00:18:39.517 "dma_device_id": "system", 00:18:39.517 "dma_device_type": 1 00:18:39.517 }, 00:18:39.517 { 00:18:39.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.517 "dma_device_type": 2 00:18:39.517 }, 00:18:39.517 { 00:18:39.517 "dma_device_id": "system", 00:18:39.517 "dma_device_type": 1 00:18:39.517 }, 00:18:39.517 { 00:18:39.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.517 "dma_device_type": 2 00:18:39.517 } 00:18:39.517 ], 00:18:39.517 "driver_specific": { 00:18:39.517 "raid": { 00:18:39.517 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:39.517 "strip_size_kb": 0, 00:18:39.517 "state": "online", 00:18:39.517 "raid_level": "raid1", 00:18:39.517 "superblock": true, 00:18:39.517 "num_base_bdevs": 2, 00:18:39.517 "num_base_bdevs_discovered": 2, 00:18:39.517 "num_base_bdevs_operational": 2, 00:18:39.517 "base_bdevs_list": [ 00:18:39.517 { 00:18:39.517 "name": "pt1", 00:18:39.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.517 "is_configured": true, 00:18:39.517 "data_offset": 256, 00:18:39.517 "data_size": 7936 00:18:39.517 }, 00:18:39.517 { 00:18:39.517 "name": "pt2", 00:18:39.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.517 "is_configured": true, 00:18:39.517 "data_offset": 256, 00:18:39.517 "data_size": 7936 00:18:39.517 } 00:18:39.517 ] 00:18:39.517 } 00:18:39.517 } 00:18:39.517 }' 00:18:39.517 
10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.517 pt2' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.517 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.776 [2024-11-19 10:12:53.761196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 7d966e6f-9302-4ebc-90b5-ab36944d0ed0 '!=' 7d966e6f-9302-4ebc-90b5-ab36944d0ed0 ']' 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.776 [2024-11-19 10:12:53.808922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.776 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.776 "name": "raid_bdev1", 00:18:39.776 "uuid": 
"7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:39.776 "strip_size_kb": 0, 00:18:39.776 "state": "online", 00:18:39.776 "raid_level": "raid1", 00:18:39.776 "superblock": true, 00:18:39.776 "num_base_bdevs": 2, 00:18:39.776 "num_base_bdevs_discovered": 1, 00:18:39.776 "num_base_bdevs_operational": 1, 00:18:39.776 "base_bdevs_list": [ 00:18:39.776 { 00:18:39.776 "name": null, 00:18:39.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.776 "is_configured": false, 00:18:39.776 "data_offset": 0, 00:18:39.776 "data_size": 7936 00:18:39.777 }, 00:18:39.777 { 00:18:39.777 "name": "pt2", 00:18:39.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.777 "is_configured": true, 00:18:39.777 "data_offset": 256, 00:18:39.777 "data_size": 7936 00:18:39.777 } 00:18:39.777 ] 00:18:39.777 }' 00:18:39.777 10:12:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.777 10:12:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.342 [2024-11-19 10:12:54.325092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.342 [2024-11-19 10:12:54.325277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.342 [2024-11-19 10:12:54.325416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.342 [2024-11-19 10:12:54.325503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.342 [2024-11-19 10:12:54.325523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.342 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.342 [2024-11-19 10:12:54.405064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.342 [2024-11-19 10:12:54.405261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.342 [2024-11-19 10:12:54.405299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:40.342 [2024-11-19 10:12:54.405318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.342 [2024-11-19 10:12:54.408583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.342 [2024-11-19 10:12:54.408747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.342 [2024-11-19 10:12:54.408901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.343 [2024-11-19 10:12:54.408976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.343 [2024-11-19 10:12:54.409115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:40.343 [2024-11-19 10:12:54.409156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.343 [2024-11-19 10:12:54.409455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.343 [2024-11-19 10:12:54.409698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:40.343 [2024-11-19 10:12:54.409714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:18:40.343 [2024-11-19 10:12:54.409992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.343 pt2 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.343 10:12:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.343 "name": "raid_bdev1", 00:18:40.343 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:40.343 "strip_size_kb": 0, 00:18:40.343 "state": "online", 00:18:40.343 "raid_level": "raid1", 00:18:40.343 "superblock": true, 00:18:40.343 "num_base_bdevs": 2, 00:18:40.343 "num_base_bdevs_discovered": 1, 00:18:40.343 "num_base_bdevs_operational": 1, 00:18:40.343 "base_bdevs_list": [ 00:18:40.343 { 00:18:40.343 "name": null, 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.343 "is_configured": false, 00:18:40.343 "data_offset": 256, 00:18:40.343 "data_size": 7936 00:18:40.343 }, 00:18:40.343 { 00:18:40.343 "name": "pt2", 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.343 "is_configured": true, 00:18:40.343 "data_offset": 256, 00:18:40.343 "data_size": 7936 00:18:40.343 } 00:18:40.343 ] 00:18:40.343 }' 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.343 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.909 [2024-11-19 10:12:54.913431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.909 [2024-11-19 10:12:54.913633] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.909 [2024-11-19 10:12:54.913870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.909 [2024-11-19 10:12:54.914071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:40.909 [2024-11-19 10:12:54.914312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.909 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.909 [2024-11-19 10:12:54.993422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.909 [2024-11-19 10:12:54.993644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.909 [2024-11-19 10:12:54.993687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:40.909 [2024-11-19 10:12:54.993703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.909 [2024-11-19 10:12:54.996882] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.909 [2024-11-19 10:12:54.996926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.909 [2024-11-19 10:12:54.997034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.909 [2024-11-19 10:12:54.997104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.909 [2024-11-19 10:12:54.997283] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:40.909 [2024-11-19 10:12:54.997301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.909 [2024-11-19 10:12:54.997323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:40.909 [2024-11-19 10:12:54.997404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.909 [2024-11-19 10:12:54.997513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:40.909 [2024-11-19 10:12:54.997529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.910 [2024-11-19 10:12:54.997873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.910 [2024-11-19 10:12:54.998080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:40.910 [2024-11-19 10:12:54.998101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:40.910 [2024-11-19 10:12:54.998332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.910 pt1 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.910 10:12:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.910 "name": "raid_bdev1", 00:18:40.910 "uuid": "7d966e6f-9302-4ebc-90b5-ab36944d0ed0", 00:18:40.910 "strip_size_kb": 0, 00:18:40.910 "state": "online", 00:18:40.910 
"raid_level": "raid1", 00:18:40.910 "superblock": true, 00:18:40.910 "num_base_bdevs": 2, 00:18:40.910 "num_base_bdevs_discovered": 1, 00:18:40.910 "num_base_bdevs_operational": 1, 00:18:40.910 "base_bdevs_list": [ 00:18:40.910 { 00:18:40.910 "name": null, 00:18:40.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.910 "is_configured": false, 00:18:40.910 "data_offset": 256, 00:18:40.910 "data_size": 7936 00:18:40.910 }, 00:18:40.910 { 00:18:40.910 "name": "pt2", 00:18:40.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.910 "is_configured": true, 00:18:40.910 "data_offset": 256, 00:18:40.910 "data_size": 7936 00:18:40.910 } 00:18:40.910 ] 00:18:40.910 }' 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.910 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:18:41.477 [2024-11-19 10:12:55.582111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 7d966e6f-9302-4ebc-90b5-ab36944d0ed0 '!=' 7d966e6f-9302-4ebc-90b5-ab36944d0ed0 ']' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86605 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86605 ']' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86605 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86605 00:18:41.477 killing process with pid 86605 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86605' 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86605 00:18:41.477 [2024-11-19 10:12:55.662857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.477 10:12:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86605 00:18:41.477 [2024-11-19 10:12:55.662986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.477 [2024-11-19 10:12:55.663062] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.477 [2024-11-19 10:12:55.663085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:41.735 [2024-11-19 10:12:55.865766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.125 ************************************ 00:18:43.125 END TEST raid_superblock_test_4k 00:18:43.125 ************************************ 00:18:43.125 10:12:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:43.125 00:18:43.125 real 0m6.933s 00:18:43.125 user 0m10.768s 00:18:43.125 sys 0m1.093s 00:18:43.125 10:12:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.125 10:12:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.125 10:12:57 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:43.125 10:12:57 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:43.125 10:12:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:43.125 10:12:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.125 10:12:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.125 ************************************ 00:18:43.125 START TEST raid_rebuild_test_sb_4k 00:18:43.125 ************************************ 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:43.125 
10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:43.125 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86939 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86939 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86939 ']' 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.126 10:12:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.126 [2024-11-19 10:12:57.249939] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:18:43.126 [2024-11-19 10:12:57.250395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86939 ] 00:18:43.126 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:43.126 Zero copy mechanism will not be used. 00:18:43.387 [2024-11-19 10:12:57.440121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.387 [2024-11-19 10:12:57.592255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.645 [2024-11-19 10:12:57.825455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.645 [2024-11-19 10:12:57.825868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.212 BaseBdev1_malloc 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.212 [2024-11-19 10:12:58.333005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.212 [2024-11-19 10:12:58.333106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.212 [2024-11-19 10:12:58.333147] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:44.212 [2024-11-19 10:12:58.333169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.212 [2024-11-19 10:12:58.336309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.212 [2024-11-19 10:12:58.336373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.212 BaseBdev1 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.212 BaseBdev2_malloc 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.212 [2024-11-19 10:12:58.394558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:44.212 [2024-11-19 10:12:58.394663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.212 [2024-11-19 10:12:58.394712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:44.212 [2024-11-19 10:12:58.394734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:44.212 [2024-11-19 10:12:58.398045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.212 [2024-11-19 10:12:58.398110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:44.212 BaseBdev2 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.470 spare_malloc 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.470 spare_delay 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.470 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.470 [2024-11-19 10:12:58.477370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:44.470 [2024-11-19 10:12:58.477479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.470 [2024-11-19 10:12:58.477518] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:44.470 [2024-11-19 10:12:58.477536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.470 [2024-11-19 10:12:58.480635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.470 [2024-11-19 10:12:58.480908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:44.470 spare 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.471 [2024-11-19 10:12:58.485670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.471 [2024-11-19 10:12:58.488430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.471 [2024-11-19 10:12:58.488711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:44.471 [2024-11-19 10:12:58.488735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:44.471 [2024-11-19 10:12:58.489129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:44.471 [2024-11-19 10:12:58.489371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:44.471 [2024-11-19 10:12:58.489394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:44.471 [2024-11-19 10:12:58.489686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.471 
10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.471 "name": "raid_bdev1", 00:18:44.471 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 
00:18:44.471 "strip_size_kb": 0, 00:18:44.471 "state": "online", 00:18:44.471 "raid_level": "raid1", 00:18:44.471 "superblock": true, 00:18:44.471 "num_base_bdevs": 2, 00:18:44.471 "num_base_bdevs_discovered": 2, 00:18:44.471 "num_base_bdevs_operational": 2, 00:18:44.471 "base_bdevs_list": [ 00:18:44.471 { 00:18:44.471 "name": "BaseBdev1", 00:18:44.471 "uuid": "6aa49457-e2fc-5fcb-9411-75ac6a1fab86", 00:18:44.471 "is_configured": true, 00:18:44.471 "data_offset": 256, 00:18:44.471 "data_size": 7936 00:18:44.471 }, 00:18:44.471 { 00:18:44.471 "name": "BaseBdev2", 00:18:44.471 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:44.471 "is_configured": true, 00:18:44.471 "data_offset": 256, 00:18:44.471 "data_size": 7936 00:18:44.471 } 00:18:44.471 ] 00:18:44.471 }' 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.471 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.729 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.729 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.729 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.729 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:44.988 [2024-11-19 10:12:58.962405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.988 10:12:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.988 10:12:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.988 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:45.246 [2024-11-19 10:12:59.370073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:45.246 /dev/nbd0 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.246 1+0 records in 00:18:45.246 1+0 records out 00:18:45.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250868 s, 16.3 MB/s 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.246 10:12:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:45.246 10:12:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:46.182 7936+0 records in 00:18:46.182 7936+0 records out 00:18:46.182 32505856 bytes (33 MB, 31 MiB) copied, 0.964965 s, 33.7 MB/s 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.182 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.748 [2024-11-19 10:13:00.739466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.748 [2024-11-19 10:13:00.757834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.748 "name": "raid_bdev1", 00:18:46.748 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:46.748 "strip_size_kb": 0, 00:18:46.748 "state": "online", 00:18:46.748 "raid_level": "raid1", 00:18:46.748 "superblock": true, 00:18:46.748 "num_base_bdevs": 2, 00:18:46.748 "num_base_bdevs_discovered": 1, 00:18:46.748 "num_base_bdevs_operational": 1, 00:18:46.748 "base_bdevs_list": [ 00:18:46.748 { 00:18:46.748 "name": null, 00:18:46.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.748 "is_configured": false, 00:18:46.748 "data_offset": 0, 00:18:46.748 "data_size": 7936 00:18:46.748 }, 00:18:46.748 { 00:18:46.748 "name": "BaseBdev2", 00:18:46.748 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:46.748 "is_configured": true, 00:18:46.748 "data_offset": 256, 00:18:46.748 "data_size": 7936 00:18:46.748 } 00:18:46.748 ] 00:18:46.748 }' 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.748 10:13:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.008 10:13:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.008 10:13:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.008 10:13:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.008 [2024-11-19 10:13:01.230107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.266 [2024-11-19 10:13:01.249255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:47.266 10:13:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.266 10:13:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:47.266 [2024-11-19 10:13:01.252214] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.250 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.251 10:13:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.251 "name": "raid_bdev1", 00:18:48.251 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:48.251 "strip_size_kb": 0, 00:18:48.251 "state": "online", 00:18:48.251 "raid_level": "raid1", 00:18:48.251 "superblock": true, 00:18:48.251 "num_base_bdevs": 2, 00:18:48.251 "num_base_bdevs_discovered": 2, 00:18:48.251 "num_base_bdevs_operational": 2, 00:18:48.251 "process": { 00:18:48.251 "type": "rebuild", 00:18:48.251 "target": "spare", 00:18:48.251 "progress": { 00:18:48.251 "blocks": 2304, 00:18:48.251 "percent": 29 00:18:48.251 } 00:18:48.251 }, 00:18:48.251 "base_bdevs_list": [ 00:18:48.251 { 00:18:48.251 "name": "spare", 00:18:48.251 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:48.251 "is_configured": true, 00:18:48.251 "data_offset": 256, 00:18:48.251 "data_size": 7936 00:18:48.251 }, 00:18:48.251 { 00:18:48.251 "name": "BaseBdev2", 00:18:48.251 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:48.251 "is_configured": true, 00:18:48.251 "data_offset": 256, 00:18:48.251 "data_size": 7936 00:18:48.251 } 00:18:48.251 ] 00:18:48.251 }' 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:48.251 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.251 10:13:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.251 [2024-11-19 10:13:02.426202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.251 [2024-11-19 10:13:02.464366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.251 [2024-11-19 10:13:02.464511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.251 [2024-11-19 10:13:02.464539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.251 [2024-11-19 10:13:02.464556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.509 10:13:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.509 "name": "raid_bdev1", 00:18:48.509 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:48.509 "strip_size_kb": 0, 00:18:48.509 "state": "online", 00:18:48.509 "raid_level": "raid1", 00:18:48.509 "superblock": true, 00:18:48.509 "num_base_bdevs": 2, 00:18:48.509 "num_base_bdevs_discovered": 1, 00:18:48.509 "num_base_bdevs_operational": 1, 00:18:48.509 "base_bdevs_list": [ 00:18:48.509 { 00:18:48.509 "name": null, 00:18:48.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.509 "is_configured": false, 00:18:48.509 "data_offset": 0, 00:18:48.509 "data_size": 7936 00:18:48.509 }, 00:18:48.509 { 00:18:48.509 "name": "BaseBdev2", 00:18:48.509 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:48.509 "is_configured": true, 00:18:48.509 "data_offset": 256, 00:18:48.509 "data_size": 7936 00:18:48.509 } 00:18:48.509 ] 00:18:48.509 }' 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.509 10:13:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.076 10:13:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.076 "name": "raid_bdev1", 00:18:49.076 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:49.076 "strip_size_kb": 0, 00:18:49.076 "state": "online", 00:18:49.076 "raid_level": "raid1", 00:18:49.076 "superblock": true, 00:18:49.076 "num_base_bdevs": 2, 00:18:49.076 "num_base_bdevs_discovered": 1, 00:18:49.076 "num_base_bdevs_operational": 1, 00:18:49.076 "base_bdevs_list": [ 00:18:49.076 { 00:18:49.076 "name": null, 00:18:49.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.076 "is_configured": false, 00:18:49.076 "data_offset": 0, 00:18:49.076 "data_size": 7936 00:18:49.076 }, 00:18:49.076 { 00:18:49.076 "name": "BaseBdev2", 00:18:49.076 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:49.076 "is_configured": true, 00:18:49.076 "data_offset": 256, 00:18:49.076 "data_size": 7936 00:18:49.076 } 00:18:49.076 ] 00:18:49.076 }' 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.076 10:13:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.076 [2024-11-19 10:13:03.175161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.076 [2024-11-19 10:13:03.192176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.076 10:13:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:49.076 [2024-11-19 10:13:03.195051] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.011 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.270 "name": "raid_bdev1", 00:18:50.270 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:50.270 "strip_size_kb": 0, 00:18:50.270 "state": "online", 00:18:50.270 "raid_level": "raid1", 00:18:50.270 "superblock": true, 00:18:50.270 "num_base_bdevs": 2, 00:18:50.270 "num_base_bdevs_discovered": 2, 00:18:50.270 "num_base_bdevs_operational": 2, 00:18:50.270 "process": { 00:18:50.270 "type": "rebuild", 00:18:50.270 "target": "spare", 00:18:50.270 "progress": { 00:18:50.270 "blocks": 2560, 00:18:50.270 "percent": 32 00:18:50.270 } 00:18:50.270 }, 00:18:50.270 "base_bdevs_list": [ 00:18:50.270 { 00:18:50.270 "name": "spare", 00:18:50.270 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:50.270 "is_configured": true, 00:18:50.270 "data_offset": 256, 00:18:50.270 "data_size": 7936 00:18:50.270 }, 00:18:50.270 { 00:18:50.270 "name": "BaseBdev2", 00:18:50.270 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:50.270 "is_configured": true, 00:18:50.270 "data_offset": 256, 00:18:50.270 "data_size": 7936 00:18:50.270 } 00:18:50.270 ] 00:18:50.270 }' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:50.270 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=753 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.270 10:13:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.270 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.270 "name": "raid_bdev1", 00:18:50.270 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:50.270 "strip_size_kb": 0, 00:18:50.270 "state": "online", 00:18:50.270 "raid_level": "raid1", 00:18:50.270 "superblock": true, 00:18:50.270 "num_base_bdevs": 2, 00:18:50.270 "num_base_bdevs_discovered": 2, 00:18:50.270 "num_base_bdevs_operational": 2, 00:18:50.270 "process": { 00:18:50.270 "type": "rebuild", 00:18:50.270 "target": "spare", 00:18:50.270 "progress": { 00:18:50.270 "blocks": 2816, 00:18:50.270 "percent": 35 00:18:50.270 } 00:18:50.270 }, 00:18:50.271 "base_bdevs_list": [ 00:18:50.271 { 00:18:50.271 "name": "spare", 00:18:50.271 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:50.271 "is_configured": true, 00:18:50.271 "data_offset": 256, 00:18:50.271 "data_size": 7936 00:18:50.271 }, 00:18:50.271 { 00:18:50.271 "name": "BaseBdev2", 00:18:50.271 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:50.271 "is_configured": true, 00:18:50.271 "data_offset": 256, 00:18:50.271 "data_size": 7936 00:18:50.271 } 00:18:50.271 ] 00:18:50.271 }' 00:18:50.271 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.271 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.271 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.529 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.529 10:13:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.462 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.463 "name": "raid_bdev1", 00:18:51.463 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:51.463 "strip_size_kb": 0, 00:18:51.463 "state": "online", 00:18:51.463 "raid_level": "raid1", 00:18:51.463 "superblock": true, 00:18:51.463 "num_base_bdevs": 2, 00:18:51.463 "num_base_bdevs_discovered": 2, 00:18:51.463 "num_base_bdevs_operational": 2, 00:18:51.463 "process": { 00:18:51.463 "type": "rebuild", 00:18:51.463 "target": "spare", 00:18:51.463 "progress": { 00:18:51.463 "blocks": 5888, 00:18:51.463 "percent": 74 00:18:51.463 } 00:18:51.463 }, 00:18:51.463 "base_bdevs_list": [ 00:18:51.463 { 00:18:51.463 "name": "spare", 00:18:51.463 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:51.463 "is_configured": true, 00:18:51.463 "data_offset": 256, 00:18:51.463 "data_size": 7936 00:18:51.463 
}, 00:18:51.463 { 00:18:51.463 "name": "BaseBdev2", 00:18:51.463 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:51.463 "is_configured": true, 00:18:51.463 "data_offset": 256, 00:18:51.463 "data_size": 7936 00:18:51.463 } 00:18:51.463 ] 00:18:51.463 }' 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.463 10:13:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.396 [2024-11-19 10:13:06.324348] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:52.396 [2024-11-19 10:13:06.324490] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:52.396 [2024-11-19 10:13:06.324686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.654 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.654 "name": "raid_bdev1", 00:18:52.654 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:52.654 "strip_size_kb": 0, 00:18:52.654 "state": "online", 00:18:52.654 "raid_level": "raid1", 00:18:52.654 "superblock": true, 00:18:52.654 "num_base_bdevs": 2, 00:18:52.654 "num_base_bdevs_discovered": 2, 00:18:52.654 "num_base_bdevs_operational": 2, 00:18:52.654 "base_bdevs_list": [ 00:18:52.654 { 00:18:52.654 "name": "spare", 00:18:52.654 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:52.654 "is_configured": true, 00:18:52.654 "data_offset": 256, 00:18:52.654 "data_size": 7936 00:18:52.654 }, 00:18:52.654 { 00:18:52.654 "name": "BaseBdev2", 00:18:52.654 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:52.654 "is_configured": true, 00:18:52.655 "data_offset": 256, 00:18:52.655 "data_size": 7936 00:18:52.655 } 00:18:52.655 ] 00:18:52.655 }' 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.655 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.914 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.914 "name": "raid_bdev1", 00:18:52.914 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:52.914 "strip_size_kb": 0, 00:18:52.914 "state": "online", 00:18:52.914 "raid_level": "raid1", 00:18:52.914 "superblock": true, 00:18:52.914 "num_base_bdevs": 2, 00:18:52.914 "num_base_bdevs_discovered": 2, 00:18:52.914 "num_base_bdevs_operational": 2, 00:18:52.914 "base_bdevs_list": [ 00:18:52.914 { 00:18:52.914 "name": "spare", 00:18:52.914 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:52.914 "is_configured": true, 00:18:52.914 "data_offset": 256, 00:18:52.914 "data_size": 7936 00:18:52.914 }, 00:18:52.914 { 00:18:52.914 "name": "BaseBdev2", 00:18:52.914 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:52.914 "is_configured": true, 
00:18:52.914 "data_offset": 256, 00:18:52.914 "data_size": 7936 00:18:52.914 } 00:18:52.914 ] 00:18:52.914 }' 00:18:52.914 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.914 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.914 10:13:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.914 10:13:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.914 "name": "raid_bdev1", 00:18:52.914 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:52.914 "strip_size_kb": 0, 00:18:52.914 "state": "online", 00:18:52.914 "raid_level": "raid1", 00:18:52.914 "superblock": true, 00:18:52.914 "num_base_bdevs": 2, 00:18:52.914 "num_base_bdevs_discovered": 2, 00:18:52.914 "num_base_bdevs_operational": 2, 00:18:52.914 "base_bdevs_list": [ 00:18:52.914 { 00:18:52.914 "name": "spare", 00:18:52.914 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:52.914 "is_configured": true, 00:18:52.914 "data_offset": 256, 00:18:52.914 "data_size": 7936 00:18:52.914 }, 00:18:52.914 { 00:18:52.914 "name": "BaseBdev2", 00:18:52.914 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:52.914 "is_configured": true, 00:18:52.914 "data_offset": 256, 00:18:52.914 "data_size": 7936 00:18:52.914 } 00:18:52.914 ] 00:18:52.914 }' 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.914 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.481 [2024-11-19 10:13:07.518594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.481 [2024-11-19 10:13:07.518810] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:53.481 [2024-11-19 10:13:07.518950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.481 [2024-11-19 10:13:07.519061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.481 [2024-11-19 10:13:07.519084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.481 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:53.740 /dev/nbd0 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.740 1+0 records in 00:18:53.740 1+0 records out 00:18:53.740 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003035 s, 13.5 MB/s 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.740 10:13:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:53.999 /dev/nbd1 00:18:53.999 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.258 1+0 records in 00:18:54.258 1+0 records out 00:18:54.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440193 s, 9.3 MB/s 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.258 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.517 10:13:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 [2024-11-19 10:13:09.122875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:55.088 [2024-11-19 10:13:09.122950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.088 [2024-11-19 10:13:09.122987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:55.088 [2024-11-19 10:13:09.123004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.088 [2024-11-19 10:13:09.126209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.088 [2024-11-19 10:13:09.126257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:55.088 [2024-11-19 10:13:09.126412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:55.088 [2024-11-19 
10:13:09.126489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.088 [2024-11-19 10:13:09.126696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.088 spare 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 [2024-11-19 10:13:09.226874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:55.088 [2024-11-19 10:13:09.226935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:55.088 [2024-11-19 10:13:09.227413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:55.088 [2024-11-19 10:13:09.227721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:55.088 [2024-11-19 10:13:09.227745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:55.088 [2024-11-19 10:13:09.228059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.088 "name": "raid_bdev1", 00:18:55.088 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:55.088 "strip_size_kb": 0, 00:18:55.088 "state": "online", 00:18:55.088 "raid_level": "raid1", 00:18:55.088 "superblock": true, 00:18:55.088 "num_base_bdevs": 2, 00:18:55.088 "num_base_bdevs_discovered": 2, 00:18:55.088 "num_base_bdevs_operational": 2, 00:18:55.088 "base_bdevs_list": [ 00:18:55.088 { 00:18:55.088 "name": "spare", 00:18:55.088 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:55.088 "is_configured": true, 00:18:55.088 "data_offset": 256, 00:18:55.088 "data_size": 7936 00:18:55.088 }, 00:18:55.088 { 
00:18:55.088 "name": "BaseBdev2", 00:18:55.088 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:55.088 "is_configured": true, 00:18:55.088 "data_offset": 256, 00:18:55.088 "data_size": 7936 00:18:55.088 } 00:18:55.088 ] 00:18:55.088 }' 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.088 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.659 "name": "raid_bdev1", 00:18:55.659 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:55.659 "strip_size_kb": 0, 00:18:55.659 "state": "online", 00:18:55.659 "raid_level": "raid1", 00:18:55.659 "superblock": true, 00:18:55.659 "num_base_bdevs": 2, 00:18:55.659 "num_base_bdevs_discovered": 2, 
00:18:55.659 "num_base_bdevs_operational": 2, 00:18:55.659 "base_bdevs_list": [ 00:18:55.659 { 00:18:55.659 "name": "spare", 00:18:55.659 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:55.659 "is_configured": true, 00:18:55.659 "data_offset": 256, 00:18:55.659 "data_size": 7936 00:18:55.659 }, 00:18:55.659 { 00:18:55.659 "name": "BaseBdev2", 00:18:55.659 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:55.659 "is_configured": true, 00:18:55.659 "data_offset": 256, 00:18:55.659 "data_size": 7936 00:18:55.659 } 00:18:55.659 ] 00:18:55.659 }' 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.659 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.917 [2024-11-19 10:13:09.952260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.917 10:13:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.917 10:13:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.917 "name": "raid_bdev1", 00:18:55.917 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:55.917 "strip_size_kb": 0, 00:18:55.917 "state": "online", 00:18:55.917 "raid_level": "raid1", 00:18:55.917 "superblock": true, 00:18:55.917 "num_base_bdevs": 2, 00:18:55.917 "num_base_bdevs_discovered": 1, 00:18:55.917 "num_base_bdevs_operational": 1, 00:18:55.917 "base_bdevs_list": [ 00:18:55.917 { 00:18:55.917 "name": null, 00:18:55.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.917 "is_configured": false, 00:18:55.917 "data_offset": 0, 00:18:55.917 "data_size": 7936 00:18:55.917 }, 00:18:55.917 { 00:18:55.917 "name": "BaseBdev2", 00:18:55.917 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:55.917 "is_configured": true, 00:18:55.917 "data_offset": 256, 00:18:55.917 "data_size": 7936 00:18:55.917 } 00:18:55.917 ] 00:18:55.917 }' 00:18:55.917 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.917 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.483 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.483 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.483 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.483 [2024-11-19 10:13:10.500441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.483 [2024-11-19 10:13:10.500744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:56.484 [2024-11-19 10:13:10.500778] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:56.484 [2024-11-19 10:13:10.500852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.484 [2024-11-19 10:13:10.517369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:56.484 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.484 10:13:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:56.484 [2024-11-19 10:13:10.520202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.442 "name": "raid_bdev1", 00:18:57.442 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:57.442 "strip_size_kb": 0, 00:18:57.442 "state": "online", 
00:18:57.442 "raid_level": "raid1", 00:18:57.442 "superblock": true, 00:18:57.442 "num_base_bdevs": 2, 00:18:57.442 "num_base_bdevs_discovered": 2, 00:18:57.442 "num_base_bdevs_operational": 2, 00:18:57.442 "process": { 00:18:57.442 "type": "rebuild", 00:18:57.442 "target": "spare", 00:18:57.442 "progress": { 00:18:57.442 "blocks": 2304, 00:18:57.442 "percent": 29 00:18:57.442 } 00:18:57.442 }, 00:18:57.442 "base_bdevs_list": [ 00:18:57.442 { 00:18:57.442 "name": "spare", 00:18:57.442 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:57.442 "is_configured": true, 00:18:57.442 "data_offset": 256, 00:18:57.442 "data_size": 7936 00:18:57.442 }, 00:18:57.442 { 00:18:57.442 "name": "BaseBdev2", 00:18:57.442 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:57.442 "is_configured": true, 00:18:57.442 "data_offset": 256, 00:18:57.442 "data_size": 7936 00:18:57.442 } 00:18:57.442 ] 00:18:57.442 }' 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.442 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.700 [2024-11-19 10:13:11.694108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.700 [2024-11-19 10:13:11.732032] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:57.700 [2024-11-19 
10:13:11.732185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.700 [2024-11-19 10:13:11.732214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.700 [2024-11-19 10:13:11.732230] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.700 "name": "raid_bdev1", 00:18:57.700 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:57.700 "strip_size_kb": 0, 00:18:57.700 "state": "online", 00:18:57.700 "raid_level": "raid1", 00:18:57.700 "superblock": true, 00:18:57.700 "num_base_bdevs": 2, 00:18:57.700 "num_base_bdevs_discovered": 1, 00:18:57.700 "num_base_bdevs_operational": 1, 00:18:57.700 "base_bdevs_list": [ 00:18:57.700 { 00:18:57.700 "name": null, 00:18:57.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.700 "is_configured": false, 00:18:57.700 "data_offset": 0, 00:18:57.700 "data_size": 7936 00:18:57.700 }, 00:18:57.700 { 00:18:57.700 "name": "BaseBdev2", 00:18:57.700 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:57.700 "is_configured": true, 00:18:57.700 "data_offset": 256, 00:18:57.700 "data_size": 7936 00:18:57.700 } 00:18:57.700 ] 00:18:57.700 }' 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.700 10:13:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 10:13:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:58.266 10:13:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 10:13:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 [2024-11-19 10:13:12.274267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:58.266 [2024-11-19 10:13:12.274367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.266 [2024-11-19 10:13:12.274406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:58.266 [2024-11-19 10:13:12.274427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.266 [2024-11-19 10:13:12.275147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.266 [2024-11-19 10:13:12.275186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:58.266 [2024-11-19 10:13:12.275324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:58.266 [2024-11-19 10:13:12.275360] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.266 [2024-11-19 10:13:12.275377] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:58.266 [2024-11-19 10:13:12.275416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.266 [2024-11-19 10:13:12.292319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:58.266 spare 00:18:58.266 10:13:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 10:13:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:58.266 [2024-11-19 10:13:12.295145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.201 "name": "raid_bdev1", 00:18:59.201 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:59.201 "strip_size_kb": 0, 00:18:59.201 "state": "online", 00:18:59.201 "raid_level": "raid1", 00:18:59.201 "superblock": true, 00:18:59.201 "num_base_bdevs": 2, 00:18:59.201 "num_base_bdevs_discovered": 2, 00:18:59.201 "num_base_bdevs_operational": 2, 00:18:59.201 "process": { 00:18:59.201 "type": "rebuild", 00:18:59.201 "target": "spare", 00:18:59.201 "progress": { 00:18:59.201 "blocks": 2560, 00:18:59.201 "percent": 32 00:18:59.201 } 00:18:59.201 }, 00:18:59.201 "base_bdevs_list": [ 00:18:59.201 { 00:18:59.201 "name": "spare", 00:18:59.201 "uuid": "13359998-022d-577a-b8cf-62e1dc716db0", 00:18:59.201 "is_configured": true, 00:18:59.201 "data_offset": 256, 00:18:59.201 "data_size": 7936 00:18:59.201 }, 00:18:59.201 { 00:18:59.201 "name": "BaseBdev2", 00:18:59.201 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:59.201 "is_configured": true, 00:18:59.201 "data_offset": 256, 00:18:59.201 "data_size": 7936 00:18:59.201 } 00:18:59.201 ] 00:18:59.201 }' 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:59.201 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.458 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.459 [2024-11-19 10:13:13.452923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.459 [2024-11-19 10:13:13.506594] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.459 [2024-11-19 10:13:13.506721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.459 [2024-11-19 10:13:13.506753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.459 [2024-11-19 10:13:13.506770] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.459 "name": "raid_bdev1", 00:18:59.459 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:18:59.459 "strip_size_kb": 0, 00:18:59.459 "state": "online", 00:18:59.459 "raid_level": "raid1", 00:18:59.459 "superblock": true, 00:18:59.459 "num_base_bdevs": 2, 00:18:59.459 "num_base_bdevs_discovered": 1, 00:18:59.459 "num_base_bdevs_operational": 1, 00:18:59.459 "base_bdevs_list": [ 00:18:59.459 { 00:18:59.459 "name": null, 00:18:59.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.459 "is_configured": false, 00:18:59.459 "data_offset": 0, 00:18:59.459 "data_size": 7936 00:18:59.459 }, 00:18:59.459 { 00:18:59.459 "name": "BaseBdev2", 00:18:59.459 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:18:59.459 "is_configured": true, 00:18:59.459 "data_offset": 256, 00:18:59.459 "data_size": 7936 00:18:59.459 } 00:18:59.459 ] 00:18:59.459 }' 
00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.459 10:13:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.030 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.030 "name": "raid_bdev1", 00:19:00.030 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:19:00.030 "strip_size_kb": 0, 00:19:00.030 "state": "online", 00:19:00.031 "raid_level": "raid1", 00:19:00.031 "superblock": true, 00:19:00.031 "num_base_bdevs": 2, 00:19:00.031 "num_base_bdevs_discovered": 1, 00:19:00.031 "num_base_bdevs_operational": 1, 00:19:00.031 "base_bdevs_list": [ 00:19:00.031 { 00:19:00.031 "name": null, 00:19:00.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.031 "is_configured": false, 00:19:00.031 "data_offset": 0, 
00:19:00.031 "data_size": 7936 00:19:00.031 }, 00:19:00.031 { 00:19:00.031 "name": "BaseBdev2", 00:19:00.031 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:19:00.031 "is_configured": true, 00:19:00.031 "data_offset": 256, 00:19:00.031 "data_size": 7936 00:19:00.031 } 00:19:00.031 ] 00:19:00.031 }' 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.031 [2024-11-19 10:13:14.240797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:00.031 [2024-11-19 10:13:14.241194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.031 [2024-11-19 10:13:14.241248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:00.031 [2024-11-19 10:13:14.241280] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.031 [2024-11-19 10:13:14.241960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.031 [2024-11-19 10:13:14.241993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:00.031 [2024-11-19 10:13:14.242125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:00.031 [2024-11-19 10:13:14.242157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.031 [2024-11-19 10:13:14.242177] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:00.031 [2024-11-19 10:13:14.242194] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:00.031 BaseBdev1 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.031 10:13:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.405 "name": "raid_bdev1", 00:19:01.405 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:19:01.405 "strip_size_kb": 0, 00:19:01.405 "state": "online", 00:19:01.405 "raid_level": "raid1", 00:19:01.405 "superblock": true, 00:19:01.405 "num_base_bdevs": 2, 00:19:01.405 "num_base_bdevs_discovered": 1, 00:19:01.405 "num_base_bdevs_operational": 1, 00:19:01.405 "base_bdevs_list": [ 00:19:01.405 { 00:19:01.405 "name": null, 00:19:01.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.405 "is_configured": false, 00:19:01.405 "data_offset": 0, 00:19:01.405 "data_size": 7936 00:19:01.405 }, 00:19:01.405 { 00:19:01.405 "name": "BaseBdev2", 00:19:01.405 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:19:01.405 "is_configured": true, 00:19:01.405 "data_offset": 256, 00:19:01.405 "data_size": 7936 00:19:01.405 } 00:19:01.405 ] 00:19:01.405 }' 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.405 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:19:01.664 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.665 "name": "raid_bdev1", 00:19:01.665 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:19:01.665 "strip_size_kb": 0, 00:19:01.665 "state": "online", 00:19:01.665 "raid_level": "raid1", 00:19:01.665 "superblock": true, 00:19:01.665 "num_base_bdevs": 2, 00:19:01.665 "num_base_bdevs_discovered": 1, 00:19:01.665 "num_base_bdevs_operational": 1, 00:19:01.665 "base_bdevs_list": [ 00:19:01.665 { 00:19:01.665 "name": null, 00:19:01.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.665 "is_configured": false, 00:19:01.665 "data_offset": 0, 00:19:01.665 "data_size": 7936 00:19:01.665 }, 00:19:01.665 { 00:19:01.665 "name": "BaseBdev2", 00:19:01.665 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:19:01.665 "is_configured": true, 
00:19:01.665 "data_offset": 256, 00:19:01.665 "data_size": 7936 00:19:01.665 } 00:19:01.665 ] 00:19:01.665 }' 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.665 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.923 [2024-11-19 10:13:15.937286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.923 [2024-11-19 10:13:15.937572] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.923 [2024-11-19 10:13:15.937598] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.923 request: 00:19:01.923 { 00:19:01.923 "base_bdev": "BaseBdev1", 00:19:01.923 "raid_bdev": "raid_bdev1", 00:19:01.923 "method": "bdev_raid_add_base_bdev", 00:19:01.923 "req_id": 1 00:19:01.923 } 00:19:01.923 Got JSON-RPC error response 00:19:01.923 response: 00:19:01.923 { 00:19:01.923 "code": -22, 00:19:01.923 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:01.923 } 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.923 10:13:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.859 10:13:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.859 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.859 "name": "raid_bdev1", 00:19:02.859 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:19:02.859 "strip_size_kb": 0, 00:19:02.859 "state": "online", 00:19:02.859 "raid_level": "raid1", 00:19:02.859 "superblock": true, 00:19:02.859 "num_base_bdevs": 2, 00:19:02.859 "num_base_bdevs_discovered": 1, 00:19:02.859 "num_base_bdevs_operational": 1, 00:19:02.859 "base_bdevs_list": [ 00:19:02.859 { 00:19:02.859 "name": null, 00:19:02.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.859 "is_configured": false, 00:19:02.859 "data_offset": 0, 00:19:02.859 "data_size": 7936 00:19:02.859 }, 00:19:02.859 { 00:19:02.859 "name": "BaseBdev2", 00:19:02.859 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:19:02.860 "is_configured": true, 00:19:02.860 "data_offset": 256, 00:19:02.860 "data_size": 7936 00:19:02.860 } 00:19:02.860 ] 00:19:02.860 }' 
00:19:02.860 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.860 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.427 "name": "raid_bdev1", 00:19:03.427 "uuid": "4122eb51-6df5-4eb7-aa02-d9f77b29fff3", 00:19:03.427 "strip_size_kb": 0, 00:19:03.427 "state": "online", 00:19:03.427 "raid_level": "raid1", 00:19:03.427 "superblock": true, 00:19:03.427 "num_base_bdevs": 2, 00:19:03.427 "num_base_bdevs_discovered": 1, 00:19:03.427 "num_base_bdevs_operational": 1, 00:19:03.427 "base_bdevs_list": [ 00:19:03.427 { 00:19:03.427 "name": null, 00:19:03.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.427 "is_configured": false, 00:19:03.427 "data_offset": 0, 
00:19:03.427 "data_size": 7936 00:19:03.427 }, 00:19:03.427 { 00:19:03.427 "name": "BaseBdev2", 00:19:03.427 "uuid": "a0ab8c12-c6b5-510b-9535-238cb4f07951", 00:19:03.427 "is_configured": true, 00:19:03.427 "data_offset": 256, 00:19:03.427 "data_size": 7936 00:19:03.427 } 00:19:03.427 ] 00:19:03.427 }' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86939 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86939 ']' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86939 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.427 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86939 00:19:03.686 killing process with pid 86939 00:19:03.686 Received shutdown signal, test time was about 60.000000 seconds 00:19:03.686 00:19:03.686 Latency(us) 00:19:03.686 [2024-11-19T10:13:17.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.686 [2024-11-19T10:13:17.918Z] =================================================================================================================== 00:19:03.686 [2024-11-19T10:13:17.918Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.686 10:13:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.686 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.686 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86939' 00:19:03.686 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86939 00:19:03.686 [2024-11-19 10:13:17.667090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.686 10:13:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86939 00:19:03.686 [2024-11-19 10:13:17.667279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.686 [2024-11-19 10:13:17.667362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.686 [2024-11-19 10:13:17.667384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:03.944 [2024-11-19 10:13:17.965517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.879 10:13:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:04.879 00:19:04.879 real 0m21.967s 00:19:04.879 user 0m29.610s 00:19:04.879 sys 0m2.633s 00:19:04.879 10:13:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.879 ************************************ 00:19:04.879 END TEST raid_rebuild_test_sb_4k 00:19:04.879 ************************************ 00:19:04.879 10:13:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.148 10:13:19 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:05.148 10:13:19 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:05.148 10:13:19 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:05.148 10:13:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.148 10:13:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.148 ************************************ 00:19:05.148 START TEST raid_state_function_test_sb_md_separate 00:19:05.148 ************************************ 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:05.148 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:05.149 Process raid pid: 87641 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87641 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87641' 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:05.149 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87641 00:19:05.150 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87641 ']' 00:19:05.150 10:13:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.150 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.150 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.150 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.150 10:13:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.150 [2024-11-19 10:13:19.265209] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:05.150 [2024-11-19 10:13:19.265699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.414 [2024-11-19 10:13:19.458311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.414 [2024-11-19 10:13:19.635034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.672 [2024-11-19 10:13:19.881705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.672 [2024-11-19 10:13:19.882078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.238 [2024-11-19 10:13:20.235332] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.238 [2024-11-19 10:13:20.235406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.238 [2024-11-19 10:13:20.235426] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.238 [2024-11-19 10:13:20.235443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.238 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.239 "name": "Existed_Raid", 00:19:06.239 "uuid": "04476ffb-7917-4c5a-a476-b0c0f3313b06", 00:19:06.239 "strip_size_kb": 0, 00:19:06.239 "state": "configuring", 00:19:06.239 "raid_level": "raid1", 00:19:06.239 "superblock": true, 00:19:06.239 "num_base_bdevs": 2, 00:19:06.239 "num_base_bdevs_discovered": 0, 00:19:06.239 "num_base_bdevs_operational": 2, 00:19:06.239 "base_bdevs_list": [ 00:19:06.239 { 00:19:06.239 "name": "BaseBdev1", 00:19:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.239 "is_configured": false, 00:19:06.239 "data_offset": 0, 00:19:06.239 "data_size": 0 00:19:06.239 }, 00:19:06.239 { 00:19:06.239 "name": "BaseBdev2", 00:19:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.239 "is_configured": false, 00:19:06.239 "data_offset": 0, 00:19:06.239 "data_size": 0 00:19:06.239 } 00:19:06.239 ] 00:19:06.239 }' 00:19:06.239 10:13:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.239 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.496 [2024-11-19 10:13:20.719412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.496 [2024-11-19 10:13:20.719475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.496 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.753 [2024-11-19 10:13:20.727395] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.753 [2024-11-19 10:13:20.727462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.753 [2024-11-19 10:13:20.727481] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.753 [2024-11-19 10:13:20.727502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.753 10:13:20 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.753 [2024-11-19 10:13:20.777478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.753 BaseBdev1 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.753 [ 00:19:06.753 { 00:19:06.753 "name": "BaseBdev1", 00:19:06.753 "aliases": [ 00:19:06.753 "c9f638c5-9890-4578-b84e-140f7de33c6d" 00:19:06.753 ], 00:19:06.753 "product_name": "Malloc disk", 00:19:06.753 "block_size": 4096, 00:19:06.753 "num_blocks": 8192, 00:19:06.753 "uuid": "c9f638c5-9890-4578-b84e-140f7de33c6d", 00:19:06.753 "md_size": 32, 00:19:06.753 "md_interleave": false, 00:19:06.753 "dif_type": 0, 00:19:06.753 "assigned_rate_limits": { 00:19:06.753 "rw_ios_per_sec": 0, 00:19:06.753 "rw_mbytes_per_sec": 0, 00:19:06.753 "r_mbytes_per_sec": 0, 00:19:06.753 "w_mbytes_per_sec": 0 00:19:06.753 }, 00:19:06.753 "claimed": true, 00:19:06.753 "claim_type": "exclusive_write", 00:19:06.753 "zoned": false, 00:19:06.753 "supported_io_types": { 00:19:06.753 "read": true, 00:19:06.753 "write": true, 00:19:06.753 "unmap": true, 00:19:06.753 "flush": true, 00:19:06.753 "reset": true, 00:19:06.753 "nvme_admin": false, 00:19:06.753 "nvme_io": false, 00:19:06.753 "nvme_io_md": false, 00:19:06.753 "write_zeroes": true, 00:19:06.753 "zcopy": true, 00:19:06.753 "get_zone_info": false, 00:19:06.753 "zone_management": false, 00:19:06.753 "zone_append": false, 00:19:06.753 "compare": false, 00:19:06.753 "compare_and_write": false, 00:19:06.753 "abort": true, 00:19:06.753 "seek_hole": false, 00:19:06.753 "seek_data": false, 00:19:06.753 "copy": true, 00:19:06.753 "nvme_iov_md": false 00:19:06.753 }, 00:19:06.753 "memory_domains": [ 00:19:06.753 { 00:19:06.753 "dma_device_id": "system", 00:19:06.753 "dma_device_type": 1 00:19:06.753 }, 
00:19:06.753 { 00:19:06.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.753 "dma_device_type": 2 00:19:06.753 } 00:19:06.753 ], 00:19:06.753 "driver_specific": {} 00:19:06.753 } 00:19:06.753 ] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:06.753 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.754 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.754 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.754 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.754 "name": "Existed_Raid", 00:19:06.754 "uuid": "a2f247c5-543c-4176-b556-ea354e6e757d", 00:19:06.754 "strip_size_kb": 0, 00:19:06.754 "state": "configuring", 00:19:06.754 "raid_level": "raid1", 00:19:06.754 "superblock": true, 00:19:06.754 "num_base_bdevs": 2, 00:19:06.754 "num_base_bdevs_discovered": 1, 00:19:06.754 "num_base_bdevs_operational": 2, 00:19:06.754 "base_bdevs_list": [ 00:19:06.754 { 00:19:06.754 "name": "BaseBdev1", 00:19:06.754 "uuid": "c9f638c5-9890-4578-b84e-140f7de33c6d", 00:19:06.754 "is_configured": true, 00:19:06.754 "data_offset": 256, 00:19:06.754 "data_size": 7936 00:19:06.754 }, 00:19:06.754 { 00:19:06.754 "name": "BaseBdev2", 00:19:06.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.754 "is_configured": false, 00:19:06.754 "data_offset": 0, 00:19:06.754 "data_size": 0 00:19:06.754 } 00:19:06.754 ] 00:19:06.754 }' 00:19:06.754 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.754 10:13:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:07.318 [2024-11-19 10:13:21.365810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.318 [2024-11-19 10:13:21.365887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.318 [2024-11-19 10:13:21.373845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.318 [2024-11-19 10:13:21.376458] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.318 [2024-11-19 10:13:21.376522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.318 "name": "Existed_Raid", 00:19:07.318 "uuid": "206d0573-a112-4a1f-9134-50b73cb0caff", 00:19:07.318 "strip_size_kb": 0, 00:19:07.318 "state": "configuring", 00:19:07.318 "raid_level": "raid1", 00:19:07.318 "superblock": true, 00:19:07.318 "num_base_bdevs": 2, 00:19:07.318 "num_base_bdevs_discovered": 1, 00:19:07.318 
"num_base_bdevs_operational": 2, 00:19:07.318 "base_bdevs_list": [ 00:19:07.318 { 00:19:07.318 "name": "BaseBdev1", 00:19:07.318 "uuid": "c9f638c5-9890-4578-b84e-140f7de33c6d", 00:19:07.318 "is_configured": true, 00:19:07.318 "data_offset": 256, 00:19:07.318 "data_size": 7936 00:19:07.318 }, 00:19:07.318 { 00:19:07.318 "name": "BaseBdev2", 00:19:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.318 "is_configured": false, 00:19:07.318 "data_offset": 0, 00:19:07.318 "data_size": 0 00:19:07.318 } 00:19:07.318 ] 00:19:07.318 }' 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.318 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 [2024-11-19 10:13:21.971849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.884 [2024-11-19 10:13:21.972183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:07.884 [2024-11-19 10:13:21.972204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.884 [2024-11-19 10:13:21.972329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:07.884 BaseBdev2 00:19:07.884 [2024-11-19 10:13:21.972521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:07.884 [2024-11-19 10:13:21.972541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:19:07.884 [2024-11-19 10:13:21.972660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.884 10:13:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 [ 00:19:07.884 { 00:19:07.884 "name": "BaseBdev2", 00:19:07.884 "aliases": [ 00:19:07.884 
"a519ef34-1dc8-41c5-8053-e7aff4cccaf8" 00:19:07.884 ], 00:19:07.884 "product_name": "Malloc disk", 00:19:07.884 "block_size": 4096, 00:19:07.884 "num_blocks": 8192, 00:19:07.884 "uuid": "a519ef34-1dc8-41c5-8053-e7aff4cccaf8", 00:19:07.884 "md_size": 32, 00:19:07.884 "md_interleave": false, 00:19:07.884 "dif_type": 0, 00:19:07.884 "assigned_rate_limits": { 00:19:07.884 "rw_ios_per_sec": 0, 00:19:07.884 "rw_mbytes_per_sec": 0, 00:19:07.884 "r_mbytes_per_sec": 0, 00:19:07.884 "w_mbytes_per_sec": 0 00:19:07.884 }, 00:19:07.884 "claimed": true, 00:19:07.884 "claim_type": "exclusive_write", 00:19:07.884 "zoned": false, 00:19:07.884 "supported_io_types": { 00:19:07.884 "read": true, 00:19:07.884 "write": true, 00:19:07.884 "unmap": true, 00:19:07.884 "flush": true, 00:19:07.884 "reset": true, 00:19:07.884 "nvme_admin": false, 00:19:07.884 "nvme_io": false, 00:19:07.884 "nvme_io_md": false, 00:19:07.884 "write_zeroes": true, 00:19:07.884 "zcopy": true, 00:19:07.884 "get_zone_info": false, 00:19:07.884 "zone_management": false, 00:19:07.884 "zone_append": false, 00:19:07.884 "compare": false, 00:19:07.884 "compare_and_write": false, 00:19:07.884 "abort": true, 00:19:07.884 "seek_hole": false, 00:19:07.884 "seek_data": false, 00:19:07.884 "copy": true, 00:19:07.884 "nvme_iov_md": false 00:19:07.884 }, 00:19:07.884 "memory_domains": [ 00:19:07.884 { 00:19:07.884 "dma_device_id": "system", 00:19:07.884 "dma_device_type": 1 00:19:07.884 }, 00:19:07.884 { 00:19:07.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.884 "dma_device_type": 2 00:19:07.884 } 00:19:07.884 ], 00:19:07.884 "driver_specific": {} 00:19:07.884 } 00:19:07.884 ] 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 10:13:22 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.884 "name": "Existed_Raid", 00:19:07.884 "uuid": "206d0573-a112-4a1f-9134-50b73cb0caff", 00:19:07.884 "strip_size_kb": 0, 00:19:07.884 "state": "online", 00:19:07.884 "raid_level": "raid1", 00:19:07.884 "superblock": true, 00:19:07.884 "num_base_bdevs": 2, 00:19:07.884 "num_base_bdevs_discovered": 2, 00:19:07.884 "num_base_bdevs_operational": 2, 00:19:07.884 "base_bdevs_list": [ 00:19:07.884 { 00:19:07.884 "name": "BaseBdev1", 00:19:07.884 "uuid": "c9f638c5-9890-4578-b84e-140f7de33c6d", 00:19:07.884 "is_configured": true, 00:19:07.884 "data_offset": 256, 00:19:07.884 "data_size": 7936 00:19:07.884 }, 00:19:07.884 { 00:19:07.884 "name": "BaseBdev2", 00:19:07.884 "uuid": "a519ef34-1dc8-41c5-8053-e7aff4cccaf8", 00:19:07.884 "is_configured": true, 00:19:07.884 "data_offset": 256, 00:19:07.884 "data_size": 7936 00:19:07.884 } 00:19:07.884 ] 00:19:07.884 }' 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.884 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.450 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:08.450 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:08.450 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.450 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.451 10:13:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.451 [2024-11-19 10:13:22.484526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.451 "name": "Existed_Raid", 00:19:08.451 "aliases": [ 00:19:08.451 "206d0573-a112-4a1f-9134-50b73cb0caff" 00:19:08.451 ], 00:19:08.451 "product_name": "Raid Volume", 00:19:08.451 "block_size": 4096, 00:19:08.451 "num_blocks": 7936, 00:19:08.451 "uuid": "206d0573-a112-4a1f-9134-50b73cb0caff", 00:19:08.451 "md_size": 32, 00:19:08.451 "md_interleave": false, 00:19:08.451 "dif_type": 0, 00:19:08.451 "assigned_rate_limits": { 00:19:08.451 "rw_ios_per_sec": 0, 00:19:08.451 "rw_mbytes_per_sec": 0, 00:19:08.451 "r_mbytes_per_sec": 0, 00:19:08.451 "w_mbytes_per_sec": 0 00:19:08.451 }, 00:19:08.451 "claimed": false, 00:19:08.451 "zoned": false, 00:19:08.451 "supported_io_types": { 00:19:08.451 "read": true, 00:19:08.451 "write": true, 00:19:08.451 "unmap": false, 00:19:08.451 "flush": false, 00:19:08.451 "reset": true, 00:19:08.451 "nvme_admin": false, 00:19:08.451 "nvme_io": false, 00:19:08.451 "nvme_io_md": false, 00:19:08.451 "write_zeroes": true, 00:19:08.451 "zcopy": false, 00:19:08.451 "get_zone_info": 
false, 00:19:08.451 "zone_management": false, 00:19:08.451 "zone_append": false, 00:19:08.451 "compare": false, 00:19:08.451 "compare_and_write": false, 00:19:08.451 "abort": false, 00:19:08.451 "seek_hole": false, 00:19:08.451 "seek_data": false, 00:19:08.451 "copy": false, 00:19:08.451 "nvme_iov_md": false 00:19:08.451 }, 00:19:08.451 "memory_domains": [ 00:19:08.451 { 00:19:08.451 "dma_device_id": "system", 00:19:08.451 "dma_device_type": 1 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.451 "dma_device_type": 2 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "dma_device_id": "system", 00:19:08.451 "dma_device_type": 1 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.451 "dma_device_type": 2 00:19:08.451 } 00:19:08.451 ], 00:19:08.451 "driver_specific": { 00:19:08.451 "raid": { 00:19:08.451 "uuid": "206d0573-a112-4a1f-9134-50b73cb0caff", 00:19:08.451 "strip_size_kb": 0, 00:19:08.451 "state": "online", 00:19:08.451 "raid_level": "raid1", 00:19:08.451 "superblock": true, 00:19:08.451 "num_base_bdevs": 2, 00:19:08.451 "num_base_bdevs_discovered": 2, 00:19:08.451 "num_base_bdevs_operational": 2, 00:19:08.451 "base_bdevs_list": [ 00:19:08.451 { 00:19:08.451 "name": "BaseBdev1", 00:19:08.451 "uuid": "c9f638c5-9890-4578-b84e-140f7de33c6d", 00:19:08.451 "is_configured": true, 00:19:08.451 "data_offset": 256, 00:19:08.451 "data_size": 7936 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "name": "BaseBdev2", 00:19:08.451 "uuid": "a519ef34-1dc8-41c5-8053-e7aff4cccaf8", 00:19:08.451 "is_configured": true, 00:19:08.451 "data_offset": 256, 00:19:08.451 "data_size": 7936 00:19:08.451 } 00:19:08.451 ] 00:19:08.451 } 00:19:08.451 } 00:19:08.451 }' 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:08.451 10:13:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:08.451 BaseBdev2' 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.451 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.709 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:08.709 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:08.709 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.709 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 [2024-11-19 10:13:22.740258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.710 "name": "Existed_Raid", 00:19:08.710 "uuid": 
"206d0573-a112-4a1f-9134-50b73cb0caff", 00:19:08.710 "strip_size_kb": 0, 00:19:08.710 "state": "online", 00:19:08.710 "raid_level": "raid1", 00:19:08.710 "superblock": true, 00:19:08.710 "num_base_bdevs": 2, 00:19:08.710 "num_base_bdevs_discovered": 1, 00:19:08.710 "num_base_bdevs_operational": 1, 00:19:08.710 "base_bdevs_list": [ 00:19:08.710 { 00:19:08.710 "name": null, 00:19:08.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.710 "is_configured": false, 00:19:08.710 "data_offset": 0, 00:19:08.710 "data_size": 7936 00:19:08.710 }, 00:19:08.710 { 00:19:08.710 "name": "BaseBdev2", 00:19:08.710 "uuid": "a519ef34-1dc8-41c5-8053-e7aff4cccaf8", 00:19:08.710 "is_configured": true, 00:19:08.710 "data_offset": 256, 00:19:08.710 "data_size": 7936 00:19:08.710 } 00:19:08.710 ] 00:19:08.710 }' 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.710 10:13:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.278 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.278 [2024-11-19 10:13:23.417888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.278 [2024-11-19 10:13:23.418053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.538 [2024-11-19 10:13:23.519995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.538 [2024-11-19 10:13:23.520324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.538 [2024-11-19 10:13:23.520497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.538 10:13:23 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87641 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87641 ']' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87641 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87641 00:19:09.538 killing process with pid 87641 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87641' 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87641 00:19:09.538 10:13:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@978 -- # wait 87641 00:19:09.538 [2024-11-19 10:13:23.611413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.538 [2024-11-19 10:13:23.626921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.915 10:13:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:10.915 00:19:10.915 real 0m5.602s 00:19:10.915 user 0m8.296s 00:19:10.915 sys 0m0.870s 00:19:10.915 ************************************ 00:19:10.915 END TEST raid_state_function_test_sb_md_separate 00:19:10.915 ************************************ 00:19:10.915 10:13:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.915 10:13:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.915 10:13:24 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:10.915 10:13:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:10.915 10:13:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.915 10:13:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.915 ************************************ 00:19:10.915 START TEST raid_superblock_test_md_separate 00:19:10.915 ************************************ 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_malloc 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:10.915 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87897 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87897 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87897 ']' 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.916 10:13:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.916 10:13:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.916 [2024-11-19 10:13:24.923497] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:10.916 [2024-11-19 10:13:24.923702] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87897 ] 00:19:10.916 [2024-11-19 10:13:25.115150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.174 [2024-11-19 10:13:25.260682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.432 [2024-11-19 10:13:25.482622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.432 [2024-11-19 10:13:25.482688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.691 10:13:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.691 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.953 malloc1 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.953 [2024-11-19 10:13:25.964293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:11.953 [2024-11-19 10:13:25.964420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.953 [2024-11-19 10:13:25.964455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:19:11.953 [2024-11-19 10:13:25.964471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.953 [2024-11-19 10:13:25.967358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.953 [2024-11-19 10:13:25.967411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:11.953 pt1 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.953 10:13:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.953 malloc2 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.953 [2024-11-19 10:13:26.025385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.953 [2024-11-19 10:13:26.025505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.953 [2024-11-19 10:13:26.025542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.953 [2024-11-19 10:13:26.025557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.953 [2024-11-19 10:13:26.028344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.953 [2024-11-19 10:13:26.028625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.953 pt2 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.953 [2024-11-19 10:13:26.037566] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.953 [2024-11-19 10:13:26.040245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.953 [2024-11-19 10:13:26.040538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:11.953 [2024-11-19 10:13:26.040561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:11.953 [2024-11-19 10:13:26.040704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:11.953 [2024-11-19 10:13:26.040907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.953 [2024-11-19 10:13:26.040929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.953 [2024-11-19 10:13:26.041090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.953 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.954 10:13:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.954 "name": "raid_bdev1", 00:19:11.954 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:11.954 "strip_size_kb": 0, 00:19:11.954 "state": "online", 00:19:11.954 "raid_level": "raid1", 00:19:11.954 "superblock": true, 00:19:11.954 "num_base_bdevs": 2, 00:19:11.954 "num_base_bdevs_discovered": 2, 00:19:11.954 "num_base_bdevs_operational": 2, 00:19:11.954 "base_bdevs_list": [ 00:19:11.954 { 00:19:11.954 "name": "pt1", 00:19:11.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.954 "is_configured": true, 00:19:11.954 "data_offset": 256, 00:19:11.954 "data_size": 7936 00:19:11.954 }, 00:19:11.954 { 00:19:11.954 "name": "pt2", 00:19:11.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.954 "is_configured": true, 00:19:11.954 "data_offset": 256, 00:19:11.954 "data_size": 7936 00:19:11.954 } 00:19:11.954 ] 00:19:11.954 }' 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:11.954 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.523 [2024-11-19 10:13:26.558025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.523 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.523 "name": "raid_bdev1", 00:19:12.523 "aliases": [ 00:19:12.523 "4ea036b1-b107-4435-a28b-9aaa64e8bb6e" 00:19:12.523 ], 00:19:12.523 "product_name": "Raid Volume", 00:19:12.523 "block_size": 4096, 00:19:12.523 "num_blocks": 7936, 00:19:12.524 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:12.524 "md_size": 32, 
00:19:12.524 "md_interleave": false, 00:19:12.524 "dif_type": 0, 00:19:12.524 "assigned_rate_limits": { 00:19:12.524 "rw_ios_per_sec": 0, 00:19:12.524 "rw_mbytes_per_sec": 0, 00:19:12.524 "r_mbytes_per_sec": 0, 00:19:12.524 "w_mbytes_per_sec": 0 00:19:12.524 }, 00:19:12.524 "claimed": false, 00:19:12.524 "zoned": false, 00:19:12.524 "supported_io_types": { 00:19:12.524 "read": true, 00:19:12.524 "write": true, 00:19:12.524 "unmap": false, 00:19:12.524 "flush": false, 00:19:12.524 "reset": true, 00:19:12.524 "nvme_admin": false, 00:19:12.524 "nvme_io": false, 00:19:12.524 "nvme_io_md": false, 00:19:12.524 "write_zeroes": true, 00:19:12.524 "zcopy": false, 00:19:12.524 "get_zone_info": false, 00:19:12.524 "zone_management": false, 00:19:12.524 "zone_append": false, 00:19:12.524 "compare": false, 00:19:12.524 "compare_and_write": false, 00:19:12.524 "abort": false, 00:19:12.524 "seek_hole": false, 00:19:12.524 "seek_data": false, 00:19:12.524 "copy": false, 00:19:12.524 "nvme_iov_md": false 00:19:12.524 }, 00:19:12.524 "memory_domains": [ 00:19:12.524 { 00:19:12.524 "dma_device_id": "system", 00:19:12.524 "dma_device_type": 1 00:19:12.524 }, 00:19:12.524 { 00:19:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.524 "dma_device_type": 2 00:19:12.524 }, 00:19:12.524 { 00:19:12.524 "dma_device_id": "system", 00:19:12.524 "dma_device_type": 1 00:19:12.524 }, 00:19:12.524 { 00:19:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.524 "dma_device_type": 2 00:19:12.524 } 00:19:12.524 ], 00:19:12.524 "driver_specific": { 00:19:12.524 "raid": { 00:19:12.524 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:12.524 "strip_size_kb": 0, 00:19:12.524 "state": "online", 00:19:12.524 "raid_level": "raid1", 00:19:12.524 "superblock": true, 00:19:12.524 "num_base_bdevs": 2, 00:19:12.524 "num_base_bdevs_discovered": 2, 00:19:12.524 "num_base_bdevs_operational": 2, 00:19:12.524 "base_bdevs_list": [ 00:19:12.524 { 00:19:12.524 "name": "pt1", 00:19:12.524 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:12.524 "is_configured": true, 00:19:12.524 "data_offset": 256, 00:19:12.524 "data_size": 7936 00:19:12.524 }, 00:19:12.524 { 00:19:12.524 "name": "pt2", 00:19:12.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.524 "is_configured": true, 00:19:12.524 "data_offset": 256, 00:19:12.524 "data_size": 7936 00:19:12.524 } 00:19:12.524 ] 00:19:12.524 } 00:19:12.524 } 00:19:12.524 }' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:12.524 pt2' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.524 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:12.784 [2024-11-19 10:13:26.802023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4ea036b1-b107-4435-a28b-9aaa64e8bb6e 00:19:12.784 
10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 4ea036b1-b107-4435-a28b-9aaa64e8bb6e ']' 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 [2024-11-19 10:13:26.849669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.784 [2024-11-19 10:13:26.849886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.784 [2024-11-19 10:13:26.850135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.784 [2024-11-19 10:13:26.850327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.784 [2024-11-19 10:13:26.850491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:12.784 10:13:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:12.784 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.785 [2024-11-19 10:13:26.989751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:12.785 [2024-11-19 10:13:26.992660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:12.785 [2024-11-19 10:13:26.992904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:12.785 [2024-11-19 10:13:26.993120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:19:12.785 [2024-11-19 10:13:26.993295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.785 [2024-11-19 10:13:26.993348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:12.785 request: 00:19:12.785 { 00:19:12.785 "name": "raid_bdev1", 00:19:12.785 "raid_level": "raid1", 00:19:12.785 "base_bdevs": [ 00:19:12.785 "malloc1", 00:19:12.785 "malloc2" 00:19:12.785 ], 00:19:12.785 "superblock": false, 00:19:12.785 "method": "bdev_raid_create", 00:19:12.785 "req_id": 1 00:19:12.785 } 00:19:12.785 Got JSON-RPC error response 00:19:12.785 response: 00:19:12.785 { 00:19:12.785 "code": -17, 00:19:12.785 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:12.785 } 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.785 10:13:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.785 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:12.785 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.785 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.785 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.785 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.048 10:13:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.048 [2024-11-19 10:13:27.049843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:13.048 [2024-11-19 10:13:27.049938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.048 [2024-11-19 10:13:27.049969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:13.048 [2024-11-19 10:13:27.049987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.048 [2024-11-19 10:13:27.052920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.048 [2024-11-19 10:13:27.052971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:13.048 [2024-11-19 10:13:27.053056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:13.048 [2024-11-19 10:13:27.053138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:13.048 pt1 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.048 
10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.048 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.049 "name": "raid_bdev1", 00:19:13.049 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:13.049 "strip_size_kb": 0, 00:19:13.049 "state": "configuring", 00:19:13.049 "raid_level": "raid1", 00:19:13.049 "superblock": true, 00:19:13.049 "num_base_bdevs": 2, 00:19:13.049 "num_base_bdevs_discovered": 1, 00:19:13.049 
"num_base_bdevs_operational": 2, 00:19:13.049 "base_bdevs_list": [ 00:19:13.049 { 00:19:13.049 "name": "pt1", 00:19:13.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.049 "is_configured": true, 00:19:13.049 "data_offset": 256, 00:19:13.049 "data_size": 7936 00:19:13.049 }, 00:19:13.049 { 00:19:13.049 "name": null, 00:19:13.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.049 "is_configured": false, 00:19:13.049 "data_offset": 256, 00:19:13.049 "data_size": 7936 00:19:13.049 } 00:19:13.049 ] 00:19:13.049 }' 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.049 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.627 [2024-11-19 10:13:27.617959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:13.627 [2024-11-19 10:13:27.618072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.627 [2024-11-19 10:13:27.618106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:13.627 [2024-11-19 10:13:27.618125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.627 
[2024-11-19 10:13:27.618455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.627 [2024-11-19 10:13:27.618492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:13.627 [2024-11-19 10:13:27.618569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:13.627 [2024-11-19 10:13:27.618613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:13.627 [2024-11-19 10:13:27.618766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:13.627 [2024-11-19 10:13:27.618808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:13.627 [2024-11-19 10:13:27.618903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:13.627 [2024-11-19 10:13:27.619056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:13.627 [2024-11-19 10:13:27.619070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:13.627 [2024-11-19 10:13:27.619200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.627 pt2 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.627 "name": "raid_bdev1", 00:19:13.627 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:13.627 "strip_size_kb": 0, 00:19:13.627 "state": "online", 00:19:13.627 "raid_level": "raid1", 00:19:13.627 "superblock": true, 00:19:13.627 "num_base_bdevs": 2, 00:19:13.627 "num_base_bdevs_discovered": 2, 00:19:13.627 "num_base_bdevs_operational": 2, 00:19:13.627 "base_bdevs_list": [ 00:19:13.627 { 00:19:13.627 "name": 
"pt1", 00:19:13.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.627 "is_configured": true, 00:19:13.627 "data_offset": 256, 00:19:13.627 "data_size": 7936 00:19:13.627 }, 00:19:13.627 { 00:19:13.627 "name": "pt2", 00:19:13.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.627 "is_configured": true, 00:19:13.627 "data_offset": 256, 00:19:13.627 "data_size": 7936 00:19:13.627 } 00:19:13.627 ] 00:19:13.627 }' 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.627 10:13:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.193 [2024-11-19 10:13:28.142485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.193 10:13:28 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.193 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.193 "name": "raid_bdev1", 00:19:14.193 "aliases": [ 00:19:14.193 "4ea036b1-b107-4435-a28b-9aaa64e8bb6e" 00:19:14.193 ], 00:19:14.193 "product_name": "Raid Volume", 00:19:14.193 "block_size": 4096, 00:19:14.193 "num_blocks": 7936, 00:19:14.193 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:14.193 "md_size": 32, 00:19:14.193 "md_interleave": false, 00:19:14.193 "dif_type": 0, 00:19:14.194 "assigned_rate_limits": { 00:19:14.194 "rw_ios_per_sec": 0, 00:19:14.194 "rw_mbytes_per_sec": 0, 00:19:14.194 "r_mbytes_per_sec": 0, 00:19:14.194 "w_mbytes_per_sec": 0 00:19:14.194 }, 00:19:14.194 "claimed": false, 00:19:14.194 "zoned": false, 00:19:14.194 "supported_io_types": { 00:19:14.194 "read": true, 00:19:14.194 "write": true, 00:19:14.194 "unmap": false, 00:19:14.194 "flush": false, 00:19:14.194 "reset": true, 00:19:14.194 "nvme_admin": false, 00:19:14.194 "nvme_io": false, 00:19:14.194 "nvme_io_md": false, 00:19:14.194 "write_zeroes": true, 00:19:14.194 "zcopy": false, 00:19:14.194 "get_zone_info": false, 00:19:14.194 "zone_management": false, 00:19:14.194 "zone_append": false, 00:19:14.194 "compare": false, 00:19:14.194 "compare_and_write": false, 00:19:14.194 "abort": false, 00:19:14.194 "seek_hole": false, 00:19:14.194 "seek_data": false, 00:19:14.194 "copy": false, 00:19:14.194 "nvme_iov_md": false 00:19:14.194 }, 00:19:14.194 "memory_domains": [ 00:19:14.194 { 00:19:14.194 "dma_device_id": "system", 00:19:14.194 "dma_device_type": 1 00:19:14.194 }, 00:19:14.194 { 00:19:14.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.194 "dma_device_type": 2 00:19:14.194 }, 00:19:14.194 { 00:19:14.194 "dma_device_id": "system", 00:19:14.194 "dma_device_type": 1 00:19:14.194 }, 00:19:14.194 { 00:19:14.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.194 
"dma_device_type": 2 00:19:14.194 } 00:19:14.194 ], 00:19:14.194 "driver_specific": { 00:19:14.194 "raid": { 00:19:14.194 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:14.194 "strip_size_kb": 0, 00:19:14.194 "state": "online", 00:19:14.194 "raid_level": "raid1", 00:19:14.194 "superblock": true, 00:19:14.194 "num_base_bdevs": 2, 00:19:14.194 "num_base_bdevs_discovered": 2, 00:19:14.194 "num_base_bdevs_operational": 2, 00:19:14.194 "base_bdevs_list": [ 00:19:14.194 { 00:19:14.194 "name": "pt1", 00:19:14.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:14.194 "is_configured": true, 00:19:14.194 "data_offset": 256, 00:19:14.194 "data_size": 7936 00:19:14.194 }, 00:19:14.194 { 00:19:14.194 "name": "pt2", 00:19:14.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.194 "is_configured": true, 00:19:14.194 "data_offset": 256, 00:19:14.194 "data_size": 7936 00:19:14.194 } 00:19:14.194 ] 00:19:14.194 } 00:19:14.194 } 00:19:14.194 }' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:14.194 pt2' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.194 10:13:28 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:14.194 10:13:28 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.194 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.194 [2024-11-19 10:13:28.410613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 4ea036b1-b107-4435-a28b-9aaa64e8bb6e '!=' 4ea036b1-b107-4435-a28b-9aaa64e8bb6e ']' 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.453 [2024-11-19 10:13:28.466327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.453 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.453 "name": "raid_bdev1", 00:19:14.453 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:14.454 "strip_size_kb": 0, 00:19:14.454 "state": "online", 00:19:14.454 "raid_level": "raid1", 00:19:14.454 "superblock": true, 00:19:14.454 "num_base_bdevs": 2, 00:19:14.454 "num_base_bdevs_discovered": 1, 00:19:14.454 "num_base_bdevs_operational": 1, 00:19:14.454 "base_bdevs_list": [ 00:19:14.454 { 00:19:14.454 "name": null, 00:19:14.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.454 
"is_configured": false, 00:19:14.454 "data_offset": 0, 00:19:14.454 "data_size": 7936 00:19:14.454 }, 00:19:14.454 { 00:19:14.454 "name": "pt2", 00:19:14.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.454 "is_configured": true, 00:19:14.454 "data_offset": 256, 00:19:14.454 "data_size": 7936 00:19:14.454 } 00:19:14.454 ] 00:19:14.454 }' 00:19:14.454 10:13:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.454 10:13:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 [2024-11-19 10:13:29.014363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.021 [2024-11-19 10:13:29.014402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.021 [2024-11-19 10:13:29.014517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.021 [2024-11-19 10:13:29.014602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.021 [2024-11-19 10:13:29.014622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:15.021 10:13:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:15.021 10:13:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.022 [2024-11-19 10:13:29.086395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:15.022 [2024-11-19 10:13:29.086650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.022 [2024-11-19 10:13:29.086697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:15.022 [2024-11-19 10:13:29.086717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.022 [2024-11-19 10:13:29.089698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.022 [2024-11-19 10:13:29.089884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:15.022 [2024-11-19 10:13:29.089981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:15.022 [2024-11-19 10:13:29.090055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:15.022 [2024-11-19 10:13:29.090193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:15.022 [2024-11-19 10:13:29.090219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:15.022 [2024-11-19 10:13:29.090323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:15.022 [2024-11-19 10:13:29.090472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:15.022 [2024-11-19 10:13:29.090489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:15.022 [2024-11-19 10:13:29.090685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.022 pt2 00:19:15.022 10:13:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.022 "name": "raid_bdev1", 00:19:15.022 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:15.022 "strip_size_kb": 0, 00:19:15.022 "state": "online", 00:19:15.022 "raid_level": "raid1", 00:19:15.022 "superblock": true, 00:19:15.022 "num_base_bdevs": 2, 00:19:15.022 "num_base_bdevs_discovered": 1, 00:19:15.022 "num_base_bdevs_operational": 1, 00:19:15.022 "base_bdevs_list": [ 00:19:15.022 { 00:19:15.022 "name": null, 00:19:15.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.022 "is_configured": false, 00:19:15.022 "data_offset": 256, 00:19:15.022 "data_size": 7936 00:19:15.022 }, 00:19:15.022 { 00:19:15.022 "name": "pt2", 00:19:15.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.022 "is_configured": true, 00:19:15.022 "data_offset": 256, 00:19:15.022 "data_size": 7936 00:19:15.022 } 00:19:15.022 ] 00:19:15.022 }' 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.022 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.589 [2024-11-19 10:13:29.638814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.589 [2024-11-19 10:13:29.638859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.589 [2024-11-19 10:13:29.638969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.589 [2024-11-19 10:13:29.639046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:19:15.589 [2024-11-19 10:13:29.639062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.589 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.589 [2024-11-19 10:13:29.702892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:15.589 [2024-11-19 10:13:29.703117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.589 [2024-11-19 10:13:29.703282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:15.589 [2024-11-19 
10:13:29.703435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.589 [2024-11-19 10:13:29.706461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.590 [2024-11-19 10:13:29.706614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:15.590 [2024-11-19 10:13:29.706823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:15.590 [2024-11-19 10:13:29.706996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:15.590 [2024-11-19 10:13:29.707257] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:15.590 [2024-11-19 10:13:29.707276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.590 [2024-11-19 10:13:29.707305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:15.590 [2024-11-19 10:13:29.707387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:15.590 [2024-11-19 10:13:29.707496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:15.590 [2024-11-19 10:13:29.707512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:15.590 pt1 00:19:15.590 [2024-11-19 10:13:29.707613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:15.590 [2024-11-19 10:13:29.707765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:15.590 [2024-11-19 10:13:29.707812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:15.590 [2024-11-19 10:13:29.707952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.590 10:13:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.590 "name": "raid_bdev1", 00:19:15.590 "uuid": "4ea036b1-b107-4435-a28b-9aaa64e8bb6e", 00:19:15.590 "strip_size_kb": 0, 00:19:15.590 "state": "online", 00:19:15.590 "raid_level": "raid1", 00:19:15.590 "superblock": true, 00:19:15.590 "num_base_bdevs": 2, 00:19:15.590 "num_base_bdevs_discovered": 1, 00:19:15.590 "num_base_bdevs_operational": 1, 00:19:15.590 "base_bdevs_list": [ 00:19:15.590 { 00:19:15.590 "name": null, 00:19:15.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.590 "is_configured": false, 00:19:15.590 "data_offset": 256, 00:19:15.590 "data_size": 7936 00:19:15.590 }, 00:19:15.590 { 00:19:15.590 "name": "pt2", 00:19:15.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.590 "is_configured": true, 00:19:15.590 "data_offset": 256, 00:19:15.590 "data_size": 7936 00:19:15.590 } 00:19:15.590 ] 00:19:15.590 }' 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.590 10:13:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:16.157 10:13:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.157 [2024-11-19 10:13:30.279426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 4ea036b1-b107-4435-a28b-9aaa64e8bb6e '!=' 4ea036b1-b107-4435-a28b-9aaa64e8bb6e ']' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87897 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87897 ']' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87897 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87897 00:19:16.157 killing process with pid 87897 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87897' 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87897 00:19:16.157 [2024-11-19 10:13:30.353456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.157 10:13:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87897 00:19:16.157 [2024-11-19 10:13:30.353592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.157 [2024-11-19 10:13:30.353673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.157 [2024-11-19 10:13:30.353702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:16.415 [2024-11-19 10:13:30.574523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.791 ************************************ 00:19:17.791 END TEST raid_superblock_test_md_separate 00:19:17.791 ************************************ 00:19:17.791 10:13:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:17.791 00:19:17.791 real 0m6.901s 00:19:17.791 user 0m10.772s 00:19:17.791 sys 0m1.068s 00:19:17.791 10:13:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.791 10:13:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.791 10:13:31 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:17.791 10:13:31 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:17.791 10:13:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:17.791 10:13:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.791 10:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.791 
************************************ 00:19:17.791 START TEST raid_rebuild_test_sb_md_separate 00:19:17.791 ************************************ 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 
-- # local base_bdevs 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88231 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88231 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88231 ']' 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.791 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.792 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.792 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.792 10:13:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.792 [2024-11-19 10:13:31.858465] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:17.792 [2024-11-19 10:13:31.858909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88231 ] 00:19:17.792 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:17.792 Zero copy mechanism will not be used. 00:19:18.050 [2024-11-19 10:13:32.036182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.050 [2024-11-19 10:13:32.186223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.308 [2024-11-19 10:13:32.414066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.308 [2024-11-19 10:13:32.414426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 BaseBdev1_malloc 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 [2024-11-19 10:13:32.952236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:18.877 [2024-11-19 10:13:32.952489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.877 [2024-11-19 10:13:32.952572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:18.877 [2024-11-19 10:13:32.952607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.877 [2024-11-19 10:13:32.955414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.877 [2024-11-19 10:13:32.955465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:18.877 BaseBdev1 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:18.877 10:13:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 BaseBdev2_malloc 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 [2024-11-19 10:13:33.013504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:18.877 [2024-11-19 10:13:33.013741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.877 [2024-11-19 10:13:33.013799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:18.877 [2024-11-19 10:13:33.013825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.877 [2024-11-19 10:13:33.016545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.877 BaseBdev2 00:19:18.877 [2024-11-19 10:13:33.016712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 spare_malloc 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 spare_delay 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 [2024-11-19 10:13:33.091540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.877 [2024-11-19 10:13:33.091767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.877 [2024-11-19 10:13:33.091870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:18.877 [2024-11-19 10:13:33.092013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.877 [2024-11-19 10:13:33.094859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.877 [2024-11-19 10:13:33.095021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.877 spare 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:18.877 
10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 [2024-11-19 10:13:33.099724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.877 [2024-11-19 10:13:33.102405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.877 [2024-11-19 10:13:33.102829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:18.877 [2024-11-19 10:13:33.102864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:18.877 [2024-11-19 10:13:33.102992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.877 [2024-11-19 10:13:33.103177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:18.877 [2024-11-19 10:13:33.103192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:18.877 [2024-11-19 10:13:33.103363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.877 
10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.877 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.878 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.878 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.136 "name": "raid_bdev1", 00:19:19.136 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:19.136 "strip_size_kb": 0, 00:19:19.136 "state": "online", 00:19:19.136 "raid_level": "raid1", 00:19:19.136 "superblock": true, 00:19:19.136 "num_base_bdevs": 2, 00:19:19.136 "num_base_bdevs_discovered": 2, 00:19:19.136 "num_base_bdevs_operational": 2, 00:19:19.136 "base_bdevs_list": [ 00:19:19.136 { 00:19:19.136 "name": "BaseBdev1", 00:19:19.136 "uuid": "76b662aa-e43a-5412-ae5f-312a5a9ca465", 00:19:19.136 "is_configured": true, 00:19:19.136 "data_offset": 256, 00:19:19.136 "data_size": 7936 00:19:19.136 }, 00:19:19.136 { 00:19:19.136 "name": "BaseBdev2", 00:19:19.136 "uuid": 
"e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:19.136 "is_configured": true, 00:19:19.136 "data_offset": 256, 00:19:19.136 "data_size": 7936 00:19:19.136 } 00:19:19.136 ] 00:19:19.136 }' 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.136 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.702 [2024-11-19 10:13:33.640359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:19.702 10:13:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:19.702 10:13:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:19.960 [2024-11-19 10:13:34.032104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:19.960 /dev/nbd0 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:19.960 10:13:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.960 1+0 records in 00:19:19.960 1+0 records out 00:19:19.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388342 s, 10.5 MB/s 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:19.960 10:13:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:20.893 7936+0 records in 00:19:20.893 7936+0 records out 00:19:20.893 32505856 bytes (33 MB, 31 MiB) copied, 0.921045 s, 35.3 MB/s 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.893 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:21.150 [2024-11-19 10:13:35.358004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.150 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.150 [2024-11-19 10:13:35.382151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.408 10:13:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.408 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.409 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.409 "name": "raid_bdev1", 00:19:21.409 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:21.409 "strip_size_kb": 0, 00:19:21.409 "state": "online", 00:19:21.409 "raid_level": "raid1", 00:19:21.409 "superblock": true, 00:19:21.409 "num_base_bdevs": 2, 00:19:21.409 "num_base_bdevs_discovered": 1, 00:19:21.409 "num_base_bdevs_operational": 1, 00:19:21.409 "base_bdevs_list": [ 00:19:21.409 { 00:19:21.409 "name": null, 00:19:21.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.409 "is_configured": false, 00:19:21.409 "data_offset": 0, 00:19:21.409 "data_size": 7936 00:19:21.409 }, 00:19:21.409 { 00:19:21.409 "name": "BaseBdev2", 00:19:21.409 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:21.409 "is_configured": true, 00:19:21.409 "data_offset": 256, 00:19:21.409 "data_size": 7936 00:19:21.409 } 
00:19:21.409 ] 00:19:21.409 }' 00:19:21.409 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.409 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.973 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.973 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.973 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.973 [2024-11-19 10:13:35.938301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.973 [2024-11-19 10:13:35.952404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:21.973 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.973 10:13:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:21.973 [2024-11-19 10:13:35.955164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.966 10:13:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.966 10:13:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.966 "name": "raid_bdev1", 00:19:22.966 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:22.966 "strip_size_kb": 0, 00:19:22.966 "state": "online", 00:19:22.966 "raid_level": "raid1", 00:19:22.966 "superblock": true, 00:19:22.966 "num_base_bdevs": 2, 00:19:22.966 "num_base_bdevs_discovered": 2, 00:19:22.966 "num_base_bdevs_operational": 2, 00:19:22.966 "process": { 00:19:22.966 "type": "rebuild", 00:19:22.966 "target": "spare", 00:19:22.966 "progress": { 00:19:22.966 "blocks": 2560, 00:19:22.966 "percent": 32 00:19:22.966 } 00:19:22.966 }, 00:19:22.966 "base_bdevs_list": [ 00:19:22.966 { 00:19:22.966 "name": "spare", 00:19:22.966 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:22.966 "is_configured": true, 00:19:22.966 "data_offset": 256, 00:19:22.966 "data_size": 7936 00:19:22.966 }, 00:19:22.966 { 00:19:22.966 "name": "BaseBdev2", 00:19:22.966 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:22.966 "is_configured": true, 00:19:22.966 "data_offset": 256, 00:19:22.966 "data_size": 7936 00:19:22.966 } 00:19:22.966 ] 00:19:22.966 }' 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.966 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.966 [2024-11-19 10:13:37.133738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.966 [2024-11-19 10:13:37.167677] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.966 [2024-11-19 10:13:37.167825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.966 [2024-11-19 10:13:37.167858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.966 [2024-11-19 10:13:37.167875] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.242 "name": "raid_bdev1", 00:19:23.242 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:23.242 "strip_size_kb": 0, 00:19:23.242 "state": "online", 00:19:23.242 "raid_level": "raid1", 00:19:23.242 "superblock": true, 00:19:23.242 "num_base_bdevs": 2, 00:19:23.242 "num_base_bdevs_discovered": 1, 00:19:23.242 "num_base_bdevs_operational": 1, 00:19:23.242 "base_bdevs_list": [ 00:19:23.242 { 00:19:23.242 "name": null, 00:19:23.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.242 "is_configured": false, 00:19:23.242 "data_offset": 0, 00:19:23.242 "data_size": 7936 00:19:23.242 }, 00:19:23.242 { 00:19:23.242 "name": "BaseBdev2", 00:19:23.242 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:23.242 "is_configured": true, 00:19:23.242 "data_offset": 
256, 00:19:23.242 "data_size": 7936 00:19:23.242 } 00:19:23.242 ] 00:19:23.242 }' 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.242 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.501 "name": "raid_bdev1", 00:19:23.501 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:23.501 "strip_size_kb": 0, 00:19:23.501 "state": "online", 00:19:23.501 "raid_level": "raid1", 00:19:23.501 "superblock": true, 00:19:23.501 "num_base_bdevs": 2, 00:19:23.501 "num_base_bdevs_discovered": 1, 00:19:23.501 "num_base_bdevs_operational": 1, 
00:19:23.501 "base_bdevs_list": [ 00:19:23.501 { 00:19:23.501 "name": null, 00:19:23.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.501 "is_configured": false, 00:19:23.501 "data_offset": 0, 00:19:23.501 "data_size": 7936 00:19:23.501 }, 00:19:23.501 { 00:19:23.501 "name": "BaseBdev2", 00:19:23.501 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:23.501 "is_configured": true, 00:19:23.501 "data_offset": 256, 00:19:23.501 "data_size": 7936 00:19:23.501 } 00:19:23.501 ] 00:19:23.501 }' 00:19:23.501 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.759 [2024-11-19 10:13:37.841079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.759 [2024-11-19 10:13:37.854291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.759 10:13:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:23.759 [2024-11-19 10:13:37.857041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.694 10:13:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.694 "name": "raid_bdev1", 00:19:24.694 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:24.694 "strip_size_kb": 0, 00:19:24.694 "state": "online", 00:19:24.694 "raid_level": "raid1", 00:19:24.694 "superblock": true, 00:19:24.694 "num_base_bdevs": 2, 00:19:24.694 "num_base_bdevs_discovered": 2, 00:19:24.694 "num_base_bdevs_operational": 2, 00:19:24.694 "process": { 00:19:24.694 "type": "rebuild", 00:19:24.694 "target": "spare", 00:19:24.694 "progress": { 00:19:24.694 "blocks": 2304, 00:19:24.694 "percent": 29 00:19:24.694 } 00:19:24.694 }, 00:19:24.694 "base_bdevs_list": [ 00:19:24.694 { 00:19:24.694 "name": "spare", 00:19:24.694 "uuid": 
"8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:24.694 "is_configured": true, 00:19:24.694 "data_offset": 256, 00:19:24.694 "data_size": 7936 00:19:24.694 }, 00:19:24.694 { 00:19:24.694 "name": "BaseBdev2", 00:19:24.694 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:24.694 "is_configured": true, 00:19:24.694 "data_offset": 256, 00:19:24.694 "data_size": 7936 00:19:24.694 } 00:19:24.694 ] 00:19:24.694 }' 00:19:24.694 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.952 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.952 10:13:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:24.952 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=788 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.952 
10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.952 "name": "raid_bdev1", 00:19:24.952 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:24.952 "strip_size_kb": 0, 00:19:24.952 "state": "online", 00:19:24.952 "raid_level": "raid1", 00:19:24.952 "superblock": true, 00:19:24.952 "num_base_bdevs": 2, 00:19:24.952 "num_base_bdevs_discovered": 2, 00:19:24.952 "num_base_bdevs_operational": 2, 00:19:24.952 "process": { 00:19:24.952 "type": "rebuild", 00:19:24.952 "target": "spare", 00:19:24.952 "progress": { 00:19:24.952 "blocks": 2816, 00:19:24.952 "percent": 35 00:19:24.952 } 00:19:24.952 }, 00:19:24.952 "base_bdevs_list": [ 00:19:24.952 { 00:19:24.952 "name": "spare", 00:19:24.952 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:24.952 "is_configured": true, 00:19:24.952 "data_offset": 256, 00:19:24.952 "data_size": 7936 00:19:24.952 
}, 00:19:24.952 { 00:19:24.952 "name": "BaseBdev2", 00:19:24.952 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:24.952 "is_configured": true, 00:19:24.952 "data_offset": 256, 00:19:24.952 "data_size": 7936 00:19:24.952 } 00:19:24.952 ] 00:19:24.952 }' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.952 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.209 10:13:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.142 "name": "raid_bdev1", 00:19:26.142 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:26.142 "strip_size_kb": 0, 00:19:26.142 "state": "online", 00:19:26.142 "raid_level": "raid1", 00:19:26.142 "superblock": true, 00:19:26.142 "num_base_bdevs": 2, 00:19:26.142 "num_base_bdevs_discovered": 2, 00:19:26.142 "num_base_bdevs_operational": 2, 00:19:26.142 "process": { 00:19:26.142 "type": "rebuild", 00:19:26.142 "target": "spare", 00:19:26.142 "progress": { 00:19:26.142 "blocks": 5888, 00:19:26.142 "percent": 74 00:19:26.142 } 00:19:26.142 }, 00:19:26.142 "base_bdevs_list": [ 00:19:26.142 { 00:19:26.142 "name": "spare", 00:19:26.142 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:26.142 "is_configured": true, 00:19:26.142 "data_offset": 256, 00:19:26.142 "data_size": 7936 00:19:26.142 }, 00:19:26.142 { 00:19:26.142 "name": "BaseBdev2", 00:19:26.142 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:26.142 "is_configured": true, 00:19:26.142 "data_offset": 256, 00:19:26.142 "data_size": 7936 00:19:26.142 } 00:19:26.142 ] 00:19:26.142 }' 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.142 10:13:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.077 [2024-11-19 10:13:40.987202] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:27.077 [2024-11-19 10:13:40.987355] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:27.077 [2024-11-19 10:13:40.987551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.336 "name": "raid_bdev1", 00:19:27.336 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:27.336 
"strip_size_kb": 0, 00:19:27.336 "state": "online", 00:19:27.336 "raid_level": "raid1", 00:19:27.336 "superblock": true, 00:19:27.336 "num_base_bdevs": 2, 00:19:27.336 "num_base_bdevs_discovered": 2, 00:19:27.336 "num_base_bdevs_operational": 2, 00:19:27.336 "base_bdevs_list": [ 00:19:27.336 { 00:19:27.336 "name": "spare", 00:19:27.336 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:27.336 "is_configured": true, 00:19:27.336 "data_offset": 256, 00:19:27.336 "data_size": 7936 00:19:27.336 }, 00:19:27.336 { 00:19:27.336 "name": "BaseBdev2", 00:19:27.336 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:27.336 "is_configured": true, 00:19:27.336 "data_offset": 256, 00:19:27.336 "data_size": 7936 00:19:27.336 } 00:19:27.336 ] 00:19:27.336 }' 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.336 10:13:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.336 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.336 "name": "raid_bdev1", 00:19:27.336 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:27.336 "strip_size_kb": 0, 00:19:27.336 "state": "online", 00:19:27.336 "raid_level": "raid1", 00:19:27.336 "superblock": true, 00:19:27.336 "num_base_bdevs": 2, 00:19:27.336 "num_base_bdevs_discovered": 2, 00:19:27.336 "num_base_bdevs_operational": 2, 00:19:27.336 "base_bdevs_list": [ 00:19:27.336 { 00:19:27.336 "name": "spare", 00:19:27.336 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:27.336 "is_configured": true, 00:19:27.336 "data_offset": 256, 00:19:27.336 "data_size": 7936 00:19:27.336 }, 00:19:27.336 { 00:19:27.336 "name": "BaseBdev2", 00:19:27.336 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:27.336 "is_configured": true, 00:19:27.336 "data_offset": 256, 00:19:27.336 "data_size": 7936 00:19:27.336 } 00:19:27.336 ] 00:19:27.336 }' 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.595 10:13:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.595 "name": "raid_bdev1", 00:19:27.595 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:27.595 "strip_size_kb": 0, 00:19:27.595 "state": "online", 00:19:27.595 "raid_level": "raid1", 00:19:27.595 "superblock": true, 00:19:27.595 "num_base_bdevs": 2, 00:19:27.595 "num_base_bdevs_discovered": 2, 00:19:27.595 "num_base_bdevs_operational": 2, 00:19:27.595 "base_bdevs_list": [ 00:19:27.595 { 00:19:27.595 "name": "spare", 00:19:27.595 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:27.595 "is_configured": true, 00:19:27.595 "data_offset": 256, 00:19:27.595 "data_size": 7936 00:19:27.595 }, 00:19:27.595 { 00:19:27.595 "name": "BaseBdev2", 00:19:27.595 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:27.595 "is_configured": true, 00:19:27.595 "data_offset": 256, 00:19:27.595 "data_size": 7936 00:19:27.595 } 00:19:27.595 ] 00:19:27.595 }' 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.595 10:13:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.161 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:28.161 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.161 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.161 [2024-11-19 10:13:42.188765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.161 [2024-11-19 10:13:42.188827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.161 [2024-11-19 10:13:42.188964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.161 [2024-11-19 10:13:42.189071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:19:28.162 [2024-11-19 10:13:42.189098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:28.162 10:13:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.162 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:28.420 /dev/nbd0 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.420 1+0 records in 00:19:28.420 1+0 records out 00:19:28.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394068 
s, 10.4 MB/s 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.420 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:28.679 /dev/nbd1 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.679 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.938 1+0 records in 00:19:28.938 1+0 records out 00:19:28.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392573 s, 10.4 MB/s 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.938 10:13:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.938 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.196 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:29.455 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.713 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.713 
10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.713 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 [2024-11-19 10:13:43.710939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:29.714 [2024-11-19 10:13:43.711026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.714 [2024-11-19 10:13:43.711066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:19:29.714 [2024-11-19 10:13:43.711092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.714 [2024-11-19 10:13:43.714298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.714 [2024-11-19 10:13:43.714363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:29.714 [2024-11-19 10:13:43.714473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:29.714 [2024-11-19 10:13:43.714558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.714 [2024-11-19 10:13:43.714814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.714 spare 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 [2024-11-19 10:13:43.815042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:29.714 [2024-11-19 10:13:43.815143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:29.714 [2024-11-19 10:13:43.815358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:29.714 [2024-11-19 10:13:43.815624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:29.714 [2024-11-19 10:13:43.815658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:29.714 [2024-11-19 10:13:43.815882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.714 10:13:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.714 "name": "raid_bdev1", 00:19:29.714 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:29.714 "strip_size_kb": 0, 00:19:29.714 "state": "online", 00:19:29.714 "raid_level": "raid1", 00:19:29.714 "superblock": true, 00:19:29.714 "num_base_bdevs": 2, 00:19:29.714 "num_base_bdevs_discovered": 2, 00:19:29.714 "num_base_bdevs_operational": 2, 00:19:29.714 "base_bdevs_list": [ 00:19:29.714 { 00:19:29.714 "name": "spare", 00:19:29.714 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:29.714 "is_configured": true, 00:19:29.714 "data_offset": 256, 00:19:29.714 "data_size": 7936 00:19:29.714 }, 00:19:29.714 { 00:19:29.714 "name": "BaseBdev2", 00:19:29.714 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:29.714 "is_configured": true, 00:19:29.714 "data_offset": 256, 00:19:29.714 "data_size": 7936 00:19:29.714 } 00:19:29.714 ] 00:19:29.714 }' 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.714 10:13:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.283 "name": "raid_bdev1", 00:19:30.283 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:30.283 "strip_size_kb": 0, 00:19:30.283 "state": "online", 00:19:30.283 "raid_level": "raid1", 00:19:30.283 "superblock": true, 00:19:30.283 "num_base_bdevs": 2, 00:19:30.283 "num_base_bdevs_discovered": 2, 00:19:30.283 "num_base_bdevs_operational": 2, 00:19:30.283 "base_bdevs_list": [ 00:19:30.283 { 00:19:30.283 "name": "spare", 00:19:30.283 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:30.283 "is_configured": true, 00:19:30.283 "data_offset": 256, 00:19:30.283 "data_size": 7936 00:19:30.283 }, 00:19:30.283 { 00:19:30.283 "name": "BaseBdev2", 00:19:30.283 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:30.283 "is_configured": true, 00:19:30.283 "data_offset": 256, 00:19:30.283 "data_size": 7936 00:19:30.283 } 00:19:30.283 ] 00:19:30.283 }' 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.542 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.542 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:30.542 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 [2024-11-19 10:13:44.543284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.543 10:13:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.543 "name": "raid_bdev1", 00:19:30.543 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:30.543 "strip_size_kb": 0, 00:19:30.543 "state": "online", 00:19:30.543 "raid_level": "raid1", 00:19:30.543 "superblock": true, 00:19:30.543 "num_base_bdevs": 2, 00:19:30.543 "num_base_bdevs_discovered": 1, 00:19:30.543 "num_base_bdevs_operational": 1, 00:19:30.543 "base_bdevs_list": [ 00:19:30.543 { 00:19:30.543 "name": null, 00:19:30.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.543 "is_configured": false, 00:19:30.543 "data_offset": 0, 00:19:30.543 "data_size": 7936 00:19:30.543 }, 00:19:30.543 { 00:19:30.543 "name": "BaseBdev2", 00:19:30.543 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:30.543 "is_configured": true, 00:19:30.543 "data_offset": 256, 00:19:30.543 "data_size": 7936 00:19:30.543 } 
00:19:30.543 ] 00:19:30.543 }' 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.543 10:13:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.109 10:13:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.109 10:13:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.109 10:13:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.109 [2024-11-19 10:13:45.075453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.109 [2024-11-19 10:13:45.075749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:31.109 [2024-11-19 10:13:45.075803] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:31.109 [2024-11-19 10:13:45.075866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.109 [2024-11-19 10:13:45.088907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:31.109 10:13:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.109 10:13:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:31.109 [2024-11-19 10:13:45.091656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.045 "name": "raid_bdev1", 00:19:32.045 
"uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:32.045 "strip_size_kb": 0, 00:19:32.045 "state": "online", 00:19:32.045 "raid_level": "raid1", 00:19:32.045 "superblock": true, 00:19:32.045 "num_base_bdevs": 2, 00:19:32.045 "num_base_bdevs_discovered": 2, 00:19:32.045 "num_base_bdevs_operational": 2, 00:19:32.045 "process": { 00:19:32.045 "type": "rebuild", 00:19:32.045 "target": "spare", 00:19:32.045 "progress": { 00:19:32.045 "blocks": 2560, 00:19:32.045 "percent": 32 00:19:32.045 } 00:19:32.045 }, 00:19:32.045 "base_bdevs_list": [ 00:19:32.045 { 00:19:32.045 "name": "spare", 00:19:32.045 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:32.045 "is_configured": true, 00:19:32.045 "data_offset": 256, 00:19:32.045 "data_size": 7936 00:19:32.045 }, 00:19:32.045 { 00:19:32.045 "name": "BaseBdev2", 00:19:32.045 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:32.045 "is_configured": true, 00:19:32.045 "data_offset": 256, 00:19:32.045 "data_size": 7936 00:19:32.045 } 00:19:32.045 ] 00:19:32.045 }' 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.045 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.045 [2024-11-19 10:13:46.249607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.303 
[2024-11-19 10:13:46.303294] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:32.303 [2024-11-19 10:13:46.303445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.303 [2024-11-19 10:13:46.303472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.303 [2024-11-19 10:13:46.303506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.303 10:13:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.303 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.303 "name": "raid_bdev1", 00:19:32.303 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:32.303 "strip_size_kb": 0, 00:19:32.303 "state": "online", 00:19:32.303 "raid_level": "raid1", 00:19:32.303 "superblock": true, 00:19:32.303 "num_base_bdevs": 2, 00:19:32.303 "num_base_bdevs_discovered": 1, 00:19:32.303 "num_base_bdevs_operational": 1, 00:19:32.303 "base_bdevs_list": [ 00:19:32.303 { 00:19:32.303 "name": null, 00:19:32.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.303 "is_configured": false, 00:19:32.303 "data_offset": 0, 00:19:32.303 "data_size": 7936 00:19:32.303 }, 00:19:32.303 { 00:19:32.303 "name": "BaseBdev2", 00:19:32.303 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:32.303 "is_configured": true, 00:19:32.303 "data_offset": 256, 00:19:32.303 "data_size": 7936 00:19:32.304 } 00:19:32.304 ] 00:19:32.304 }' 00:19:32.304 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.304 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.871 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:32.871 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.871 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.871 [2024-11-19 10:13:46.863140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.871 [2024-11-19 10:13:46.863239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.871 [2024-11-19 10:13:46.863291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:32.871 [2024-11-19 10:13:46.863311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.871 [2024-11-19 10:13:46.863675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.871 [2024-11-19 10:13:46.863720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.871 [2024-11-19 10:13:46.863841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:32.871 [2024-11-19 10:13:46.863868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:32.871 [2024-11-19 10:13:46.863884] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:32.871 [2024-11-19 10:13:46.863919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.871 [2024-11-19 10:13:46.877361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:32.871 spare 00:19:32.871 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.871 10:13:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:32.871 [2024-11-19 10:13:46.880122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.806 "name": 
"raid_bdev1", 00:19:33.806 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:33.806 "strip_size_kb": 0, 00:19:33.806 "state": "online", 00:19:33.806 "raid_level": "raid1", 00:19:33.806 "superblock": true, 00:19:33.806 "num_base_bdevs": 2, 00:19:33.806 "num_base_bdevs_discovered": 2, 00:19:33.806 "num_base_bdevs_operational": 2, 00:19:33.806 "process": { 00:19:33.806 "type": "rebuild", 00:19:33.806 "target": "spare", 00:19:33.806 "progress": { 00:19:33.806 "blocks": 2560, 00:19:33.806 "percent": 32 00:19:33.806 } 00:19:33.806 }, 00:19:33.806 "base_bdevs_list": [ 00:19:33.806 { 00:19:33.806 "name": "spare", 00:19:33.806 "uuid": "8083e945-4a2e-5cdf-9743-511c90bc999e", 00:19:33.806 "is_configured": true, 00:19:33.806 "data_offset": 256, 00:19:33.806 "data_size": 7936 00:19:33.806 }, 00:19:33.806 { 00:19:33.806 "name": "BaseBdev2", 00:19:33.806 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:33.806 "is_configured": true, 00:19:33.806 "data_offset": 256, 00:19:33.806 "data_size": 7936 00:19:33.806 } 00:19:33.806 ] 00:19:33.806 }' 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.806 10:13:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 [2024-11-19 10:13:48.066008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:34.065 [2024-11-19 10:13:48.091661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.065 [2024-11-19 10:13:48.091794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.065 [2024-11-19 10:13:48.091827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.065 [2024-11-19 10:13:48.091841] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.065 "name": "raid_bdev1", 00:19:34.065 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:34.065 "strip_size_kb": 0, 00:19:34.065 "state": "online", 00:19:34.065 "raid_level": "raid1", 00:19:34.065 "superblock": true, 00:19:34.065 "num_base_bdevs": 2, 00:19:34.065 "num_base_bdevs_discovered": 1, 00:19:34.065 "num_base_bdevs_operational": 1, 00:19:34.065 "base_bdevs_list": [ 00:19:34.065 { 00:19:34.065 "name": null, 00:19:34.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.065 "is_configured": false, 00:19:34.065 "data_offset": 0, 00:19:34.065 "data_size": 7936 00:19:34.065 }, 00:19:34.065 { 00:19:34.065 "name": "BaseBdev2", 00:19:34.065 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:34.065 "is_configured": true, 00:19:34.065 "data_offset": 256, 00:19:34.065 "data_size": 7936 00:19:34.065 } 00:19:34.065 ] 00:19:34.065 }' 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.065 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.633 10:13:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.633 "name": "raid_bdev1", 00:19:34.633 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:34.633 "strip_size_kb": 0, 00:19:34.633 "state": "online", 00:19:34.633 "raid_level": "raid1", 00:19:34.633 "superblock": true, 00:19:34.633 "num_base_bdevs": 2, 00:19:34.633 "num_base_bdevs_discovered": 1, 00:19:34.633 "num_base_bdevs_operational": 1, 00:19:34.633 "base_bdevs_list": [ 00:19:34.633 { 00:19:34.633 "name": null, 00:19:34.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.633 "is_configured": false, 00:19:34.633 "data_offset": 0, 00:19:34.633 "data_size": 7936 00:19:34.633 }, 00:19:34.633 { 00:19:34.633 "name": "BaseBdev2", 00:19:34.633 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:34.633 "is_configured": true, 00:19:34.633 "data_offset": 256, 00:19:34.633 "data_size": 7936 00:19:34.633 } 00:19:34.633 ] 00:19:34.633 }' 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.633 [2024-11-19 10:13:48.779538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:34.633 [2024-11-19 10:13:48.779632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.633 [2024-11-19 10:13:48.779675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:34.633 [2024-11-19 10:13:48.779691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.633 [2024-11-19 10:13:48.780043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.633 [2024-11-19 10:13:48.780077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:34.633 [2024-11-19 10:13:48.780166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:34.633 [2024-11-19 10:13:48.780188] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:34.633 [2024-11-19 10:13:48.780207] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:34.633 [2024-11-19 10:13:48.780222] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:34.633 BaseBdev1 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.633 10:13:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.569 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.570 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.828 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.828 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.828 "name": "raid_bdev1", 00:19:35.828 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:35.828 "strip_size_kb": 0, 00:19:35.828 "state": "online", 00:19:35.828 "raid_level": "raid1", 00:19:35.828 "superblock": true, 00:19:35.828 "num_base_bdevs": 2, 00:19:35.828 "num_base_bdevs_discovered": 1, 00:19:35.828 "num_base_bdevs_operational": 1, 00:19:35.828 "base_bdevs_list": [ 00:19:35.828 { 00:19:35.828 "name": null, 00:19:35.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.828 "is_configured": false, 00:19:35.828 "data_offset": 0, 00:19:35.828 "data_size": 7936 00:19:35.828 }, 00:19:35.828 { 00:19:35.828 "name": "BaseBdev2", 00:19:35.828 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:35.828 "is_configured": true, 00:19:35.828 "data_offset": 256, 00:19:35.828 "data_size": 7936 00:19:35.828 } 00:19:35.828 ] 00:19:35.828 }' 00:19:35.828 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.828 10:13:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.086 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.344 "name": "raid_bdev1", 00:19:36.344 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:36.344 "strip_size_kb": 0, 00:19:36.344 "state": "online", 00:19:36.344 "raid_level": "raid1", 00:19:36.344 "superblock": true, 00:19:36.344 "num_base_bdevs": 2, 00:19:36.344 "num_base_bdevs_discovered": 1, 00:19:36.344 "num_base_bdevs_operational": 1, 00:19:36.344 "base_bdevs_list": [ 00:19:36.344 { 00:19:36.344 "name": null, 00:19:36.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.344 "is_configured": false, 00:19:36.344 "data_offset": 0, 00:19:36.344 "data_size": 7936 00:19:36.344 }, 00:19:36.344 { 00:19:36.344 "name": "BaseBdev2", 00:19:36.344 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:36.344 "is_configured": 
true, 00:19:36.344 "data_offset": 256, 00:19:36.344 "data_size": 7936 00:19:36.344 } 00:19:36.344 ] 00:19:36.344 }' 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.344 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.344 [2024-11-19 10:13:50.484063] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.344 [2024-11-19 10:13:50.484315] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:36.344 [2024-11-19 10:13:50.484345] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:36.344 request: 00:19:36.344 { 00:19:36.344 "base_bdev": "BaseBdev1", 00:19:36.344 "raid_bdev": "raid_bdev1", 00:19:36.345 "method": "bdev_raid_add_base_bdev", 00:19:36.345 "req_id": 1 00:19:36.345 } 00:19:36.345 Got JSON-RPC error response 00:19:36.345 response: 00:19:36.345 { 00:19:36.345 "code": -22, 00:19:36.345 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:36.345 } 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.345 10:13:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.280 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.539 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.539 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.539 "name": "raid_bdev1", 00:19:37.539 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:37.539 "strip_size_kb": 0, 00:19:37.539 "state": "online", 00:19:37.539 "raid_level": "raid1", 00:19:37.539 "superblock": true, 00:19:37.539 "num_base_bdevs": 2, 00:19:37.539 "num_base_bdevs_discovered": 1, 00:19:37.539 "num_base_bdevs_operational": 1, 00:19:37.539 "base_bdevs_list": [ 00:19:37.539 { 00:19:37.539 "name": null, 00:19:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.539 "is_configured": false, 00:19:37.539 
"data_offset": 0, 00:19:37.539 "data_size": 7936 00:19:37.539 }, 00:19:37.539 { 00:19:37.539 "name": "BaseBdev2", 00:19:37.539 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:37.539 "is_configured": true, 00:19:37.539 "data_offset": 256, 00:19:37.539 "data_size": 7936 00:19:37.539 } 00:19:37.539 ] 00:19:37.539 }' 00:19:37.539 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.539 10:13:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.797 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.797 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.797 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:37.798 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:37.798 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.056 "name": "raid_bdev1", 00:19:38.056 "uuid": "41fe739f-c90f-4aac-9195-dd9428e2338d", 00:19:38.056 
"strip_size_kb": 0, 00:19:38.056 "state": "online", 00:19:38.056 "raid_level": "raid1", 00:19:38.056 "superblock": true, 00:19:38.056 "num_base_bdevs": 2, 00:19:38.056 "num_base_bdevs_discovered": 1, 00:19:38.056 "num_base_bdevs_operational": 1, 00:19:38.056 "base_bdevs_list": [ 00:19:38.056 { 00:19:38.056 "name": null, 00:19:38.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.056 "is_configured": false, 00:19:38.056 "data_offset": 0, 00:19:38.056 "data_size": 7936 00:19:38.056 }, 00:19:38.056 { 00:19:38.056 "name": "BaseBdev2", 00:19:38.056 "uuid": "e5431c81-d4de-570f-bff0-1bfc5b74970f", 00:19:38.056 "is_configured": true, 00:19:38.056 "data_offset": 256, 00:19:38.056 "data_size": 7936 00:19:38.056 } 00:19:38.056 ] 00:19:38.056 }' 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.056 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88231 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88231 ']' 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88231 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88231 00:19:38.057 killing process with 
pid 88231 00:19:38.057 Received shutdown signal, test time was about 60.000000 seconds 00:19:38.057 00:19:38.057 Latency(us) 00:19:38.057 [2024-11-19T10:13:52.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.057 [2024-11-19T10:13:52.289Z] =================================================================================================================== 00:19:38.057 [2024-11-19T10:13:52.289Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88231' 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88231 00:19:38.057 10:13:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88231 00:19:38.057 [2024-11-19 10:13:52.215491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.057 [2024-11-19 10:13:52.215698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.057 [2024-11-19 10:13:52.215795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.057 [2024-11-19 10:13:52.215819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:38.315 [2024-11-19 10:13:52.533330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.691 ************************************ 00:19:39.691 END TEST raid_rebuild_test_sb_md_separate 00:19:39.691 ************************************ 00:19:39.691 10:13:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:39.691 00:19:39.691 real 0m21.895s 00:19:39.691 user 0m29.696s 00:19:39.691 sys 0m2.601s 00:19:39.691 10:13:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.691 10:13:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.691 10:13:53 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:39.691 10:13:53 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:39.691 10:13:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:39.691 10:13:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.691 10:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.691 ************************************ 00:19:39.691 START TEST raid_state_function_test_sb_md_interleaved 00:19:39.691 ************************************ 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:39.691 Process raid pid: 88930 00:19:39.691 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88930 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88930' 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88930 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88930 ']' 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.691 10:13:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.691 [2024-11-19 10:13:53.827062] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:19:39.691 [2024-11-19 10:13:53.827535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.949 [2024-11-19 10:13:54.018470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.207 [2024-11-19 10:13:54.183015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.207 [2024-11-19 10:13:54.409280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.207 [2024-11-19 10:13:54.409344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.774 [2024-11-19 10:13:54.852863] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:40.774 [2024-11-19 10:13:54.852934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:40.774 [2024-11-19 10:13:54.852952] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:40.774 [2024-11-19 10:13:54.852969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:40.774 10:13:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.774 10:13:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.774 "name": "Existed_Raid", 00:19:40.774 "uuid": "d9ebc0a9-6704-47b7-98a8-dad7941ae038", 00:19:40.774 "strip_size_kb": 0, 00:19:40.774 "state": "configuring", 00:19:40.774 "raid_level": "raid1", 00:19:40.774 "superblock": true, 00:19:40.774 "num_base_bdevs": 2, 00:19:40.774 "num_base_bdevs_discovered": 0, 00:19:40.774 "num_base_bdevs_operational": 2, 00:19:40.774 "base_bdevs_list": [ 00:19:40.774 { 00:19:40.774 "name": "BaseBdev1", 00:19:40.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.774 "is_configured": false, 00:19:40.774 "data_offset": 0, 00:19:40.774 "data_size": 0 00:19:40.774 }, 00:19:40.774 { 00:19:40.774 "name": "BaseBdev2", 00:19:40.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.774 "is_configured": false, 00:19:40.774 "data_offset": 0, 00:19:40.774 "data_size": 0 00:19:40.774 } 00:19:40.774 ] 00:19:40.774 }' 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.774 10:13:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 [2024-11-19 10:13:55.368929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.342 [2024-11-19 10:13:55.368978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 [2024-11-19 10:13:55.380951] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.342 [2024-11-19 10:13:55.381017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.342 [2024-11-19 10:13:55.381034] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.342 [2024-11-19 10:13:55.381053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 [2024-11-19 10:13:55.429468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.342 BaseBdev1 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 [ 00:19:41.342 { 00:19:41.342 "name": "BaseBdev1", 00:19:41.342 "aliases": [ 00:19:41.342 "07afd365-9fe6-4ef0-9584-044486e8752a" 00:19:41.342 ], 00:19:41.342 "product_name": "Malloc disk", 00:19:41.342 "block_size": 4128, 00:19:41.342 "num_blocks": 8192, 00:19:41.342 "uuid": "07afd365-9fe6-4ef0-9584-044486e8752a", 00:19:41.342 "md_size": 32, 00:19:41.342 
"md_interleave": true, 00:19:41.342 "dif_type": 0, 00:19:41.342 "assigned_rate_limits": { 00:19:41.342 "rw_ios_per_sec": 0, 00:19:41.342 "rw_mbytes_per_sec": 0, 00:19:41.342 "r_mbytes_per_sec": 0, 00:19:41.342 "w_mbytes_per_sec": 0 00:19:41.342 }, 00:19:41.342 "claimed": true, 00:19:41.342 "claim_type": "exclusive_write", 00:19:41.342 "zoned": false, 00:19:41.342 "supported_io_types": { 00:19:41.342 "read": true, 00:19:41.342 "write": true, 00:19:41.342 "unmap": true, 00:19:41.342 "flush": true, 00:19:41.342 "reset": true, 00:19:41.342 "nvme_admin": false, 00:19:41.342 "nvme_io": false, 00:19:41.342 "nvme_io_md": false, 00:19:41.342 "write_zeroes": true, 00:19:41.342 "zcopy": true, 00:19:41.342 "get_zone_info": false, 00:19:41.342 "zone_management": false, 00:19:41.342 "zone_append": false, 00:19:41.342 "compare": false, 00:19:41.342 "compare_and_write": false, 00:19:41.342 "abort": true, 00:19:41.342 "seek_hole": false, 00:19:41.342 "seek_data": false, 00:19:41.342 "copy": true, 00:19:41.342 "nvme_iov_md": false 00:19:41.342 }, 00:19:41.342 "memory_domains": [ 00:19:41.342 { 00:19:41.342 "dma_device_id": "system", 00:19:41.342 "dma_device_type": 1 00:19:41.342 }, 00:19:41.342 { 00:19:41.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.342 "dma_device_type": 2 00:19:41.342 } 00:19:41.342 ], 00:19:41.342 "driver_specific": {} 00:19:41.342 } 00:19:41.342 ] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.342 10:13:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.342 "name": "Existed_Raid", 00:19:41.342 "uuid": "83f18cf3-39fb-4bb0-883b-22b4584873ac", 00:19:41.342 "strip_size_kb": 0, 00:19:41.342 "state": "configuring", 00:19:41.342 "raid_level": "raid1", 
00:19:41.342 "superblock": true, 00:19:41.342 "num_base_bdevs": 2, 00:19:41.342 "num_base_bdevs_discovered": 1, 00:19:41.342 "num_base_bdevs_operational": 2, 00:19:41.342 "base_bdevs_list": [ 00:19:41.342 { 00:19:41.342 "name": "BaseBdev1", 00:19:41.342 "uuid": "07afd365-9fe6-4ef0-9584-044486e8752a", 00:19:41.342 "is_configured": true, 00:19:41.342 "data_offset": 256, 00:19:41.342 "data_size": 7936 00:19:41.342 }, 00:19:41.342 { 00:19:41.342 "name": "BaseBdev2", 00:19:41.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.342 "is_configured": false, 00:19:41.342 "data_offset": 0, 00:19:41.342 "data_size": 0 00:19:41.342 } 00:19:41.342 ] 00:19:41.342 }' 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.342 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.909 [2024-11-19 10:13:55.973713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.909 [2024-11-19 10:13:55.973798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:41.909 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.909 [2024-11-19 10:13:55.981791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.909 [2024-11-19 10:13:55.984518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.910 [2024-11-19 10:13:55.984578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.910 
10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.910 10:13:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.910 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.910 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.910 "name": "Existed_Raid", 00:19:41.910 "uuid": "b72f1849-baca-4a65-91c6-f1dc20f44b55", 00:19:41.910 "strip_size_kb": 0, 00:19:41.910 "state": "configuring", 00:19:41.910 "raid_level": "raid1", 00:19:41.910 "superblock": true, 00:19:41.910 "num_base_bdevs": 2, 00:19:41.910 "num_base_bdevs_discovered": 1, 00:19:41.910 "num_base_bdevs_operational": 2, 00:19:41.910 "base_bdevs_list": [ 00:19:41.910 { 00:19:41.910 "name": "BaseBdev1", 00:19:41.910 "uuid": "07afd365-9fe6-4ef0-9584-044486e8752a", 00:19:41.910 "is_configured": true, 00:19:41.910 "data_offset": 256, 00:19:41.910 "data_size": 7936 00:19:41.910 }, 00:19:41.910 { 00:19:41.910 "name": "BaseBdev2", 00:19:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.910 "is_configured": false, 00:19:41.910 "data_offset": 0, 00:19:41.910 "data_size": 0 00:19:41.910 } 00:19:41.910 ] 00:19:41.910 }' 00:19:41.910 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:41.910 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 [2024-11-19 10:13:56.524768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.477 [2024-11-19 10:13:56.525137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:42.477 [2024-11-19 10:13:56.525163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:42.477 [2024-11-19 10:13:56.525309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:42.477 BaseBdev2 00:19:42.477 [2024-11-19 10:13:56.525458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:42.477 [2024-11-19 10:13:56.525489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:42.477 [2024-11-19 10:13:56.525623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 [ 00:19:42.477 { 00:19:42.477 "name": "BaseBdev2", 00:19:42.477 "aliases": [ 00:19:42.477 "25d3f40e-4b32-4cd0-a45a-884f65ab7315" 00:19:42.477 ], 00:19:42.477 "product_name": "Malloc disk", 00:19:42.477 "block_size": 4128, 00:19:42.477 "num_blocks": 8192, 00:19:42.477 "uuid": "25d3f40e-4b32-4cd0-a45a-884f65ab7315", 00:19:42.477 "md_size": 32, 00:19:42.477 "md_interleave": true, 00:19:42.477 "dif_type": 0, 00:19:42.477 "assigned_rate_limits": { 00:19:42.477 "rw_ios_per_sec": 0, 00:19:42.477 "rw_mbytes_per_sec": 0, 00:19:42.477 "r_mbytes_per_sec": 0, 00:19:42.477 "w_mbytes_per_sec": 0 00:19:42.477 }, 00:19:42.477 "claimed": true, 00:19:42.477 "claim_type": "exclusive_write", 
00:19:42.477 "zoned": false, 00:19:42.477 "supported_io_types": { 00:19:42.477 "read": true, 00:19:42.477 "write": true, 00:19:42.477 "unmap": true, 00:19:42.477 "flush": true, 00:19:42.477 "reset": true, 00:19:42.477 "nvme_admin": false, 00:19:42.477 "nvme_io": false, 00:19:42.477 "nvme_io_md": false, 00:19:42.477 "write_zeroes": true, 00:19:42.477 "zcopy": true, 00:19:42.477 "get_zone_info": false, 00:19:42.477 "zone_management": false, 00:19:42.477 "zone_append": false, 00:19:42.477 "compare": false, 00:19:42.477 "compare_and_write": false, 00:19:42.477 "abort": true, 00:19:42.477 "seek_hole": false, 00:19:42.477 "seek_data": false, 00:19:42.477 "copy": true, 00:19:42.477 "nvme_iov_md": false 00:19:42.477 }, 00:19:42.477 "memory_domains": [ 00:19:42.477 { 00:19:42.477 "dma_device_id": "system", 00:19:42.477 "dma_device_type": 1 00:19:42.477 }, 00:19:42.477 { 00:19:42.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.477 "dma_device_type": 2 00:19:42.477 } 00:19:42.477 ], 00:19:42.477 "driver_specific": {} 00:19:42.477 } 00:19:42.477 ] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.477 
10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.477 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.477 "name": "Existed_Raid", 00:19:42.477 "uuid": "b72f1849-baca-4a65-91c6-f1dc20f44b55", 00:19:42.477 "strip_size_kb": 0, 00:19:42.477 "state": "online", 00:19:42.477 "raid_level": "raid1", 00:19:42.477 "superblock": true, 00:19:42.477 "num_base_bdevs": 2, 00:19:42.477 "num_base_bdevs_discovered": 2, 00:19:42.477 
"num_base_bdevs_operational": 2, 00:19:42.477 "base_bdevs_list": [ 00:19:42.477 { 00:19:42.477 "name": "BaseBdev1", 00:19:42.477 "uuid": "07afd365-9fe6-4ef0-9584-044486e8752a", 00:19:42.477 "is_configured": true, 00:19:42.477 "data_offset": 256, 00:19:42.477 "data_size": 7936 00:19:42.477 }, 00:19:42.477 { 00:19:42.477 "name": "BaseBdev2", 00:19:42.477 "uuid": "25d3f40e-4b32-4cd0-a45a-884f65ab7315", 00:19:42.477 "is_configured": true, 00:19:42.477 "data_offset": 256, 00:19:42.477 "data_size": 7936 00:19:42.478 } 00:19:42.478 ] 00:19:42.478 }' 00:19:42.478 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.478 10:13:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.046 10:13:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.046 [2024-11-19 10:13:57.121561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:43.046 "name": "Existed_Raid", 00:19:43.046 "aliases": [ 00:19:43.046 "b72f1849-baca-4a65-91c6-f1dc20f44b55" 00:19:43.046 ], 00:19:43.046 "product_name": "Raid Volume", 00:19:43.046 "block_size": 4128, 00:19:43.046 "num_blocks": 7936, 00:19:43.046 "uuid": "b72f1849-baca-4a65-91c6-f1dc20f44b55", 00:19:43.046 "md_size": 32, 00:19:43.046 "md_interleave": true, 00:19:43.046 "dif_type": 0, 00:19:43.046 "assigned_rate_limits": { 00:19:43.046 "rw_ios_per_sec": 0, 00:19:43.046 "rw_mbytes_per_sec": 0, 00:19:43.046 "r_mbytes_per_sec": 0, 00:19:43.046 "w_mbytes_per_sec": 0 00:19:43.046 }, 00:19:43.046 "claimed": false, 00:19:43.046 "zoned": false, 00:19:43.046 "supported_io_types": { 00:19:43.046 "read": true, 00:19:43.046 "write": true, 00:19:43.046 "unmap": false, 00:19:43.046 "flush": false, 00:19:43.046 "reset": true, 00:19:43.046 "nvme_admin": false, 00:19:43.046 "nvme_io": false, 00:19:43.046 "nvme_io_md": false, 00:19:43.046 "write_zeroes": true, 00:19:43.046 "zcopy": false, 00:19:43.046 "get_zone_info": false, 00:19:43.046 "zone_management": false, 00:19:43.046 "zone_append": false, 00:19:43.046 "compare": false, 00:19:43.046 "compare_and_write": false, 00:19:43.046 "abort": false, 00:19:43.046 "seek_hole": false, 00:19:43.046 "seek_data": false, 00:19:43.046 "copy": false, 00:19:43.046 "nvme_iov_md": false 00:19:43.046 }, 00:19:43.046 "memory_domains": [ 00:19:43.046 { 00:19:43.046 "dma_device_id": "system", 00:19:43.046 "dma_device_type": 1 00:19:43.046 }, 00:19:43.046 { 00:19:43.046 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:43.046 "dma_device_type": 2 00:19:43.046 }, 00:19:43.046 { 00:19:43.046 "dma_device_id": "system", 00:19:43.046 "dma_device_type": 1 00:19:43.046 }, 00:19:43.046 { 00:19:43.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.046 "dma_device_type": 2 00:19:43.046 } 00:19:43.046 ], 00:19:43.046 "driver_specific": { 00:19:43.046 "raid": { 00:19:43.046 "uuid": "b72f1849-baca-4a65-91c6-f1dc20f44b55", 00:19:43.046 "strip_size_kb": 0, 00:19:43.046 "state": "online", 00:19:43.046 "raid_level": "raid1", 00:19:43.046 "superblock": true, 00:19:43.046 "num_base_bdevs": 2, 00:19:43.046 "num_base_bdevs_discovered": 2, 00:19:43.046 "num_base_bdevs_operational": 2, 00:19:43.046 "base_bdevs_list": [ 00:19:43.046 { 00:19:43.046 "name": "BaseBdev1", 00:19:43.046 "uuid": "07afd365-9fe6-4ef0-9584-044486e8752a", 00:19:43.046 "is_configured": true, 00:19:43.046 "data_offset": 256, 00:19:43.046 "data_size": 7936 00:19:43.046 }, 00:19:43.046 { 00:19:43.046 "name": "BaseBdev2", 00:19:43.046 "uuid": "25d3f40e-4b32-4cd0-a45a-884f65ab7315", 00:19:43.046 "is_configured": true, 00:19:43.046 "data_offset": 256, 00:19:43.046 "data_size": 7936 00:19:43.046 } 00:19:43.046 ] 00:19:43.046 } 00:19:43.046 } 00:19:43.046 }' 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:43.046 BaseBdev2' 00:19:43.046 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:43.305 
10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 [2024-11-19 10:13:57.393225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.305 10:13:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.563 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.563 "name": "Existed_Raid", 00:19:43.563 "uuid": "b72f1849-baca-4a65-91c6-f1dc20f44b55", 00:19:43.563 "strip_size_kb": 0, 00:19:43.563 "state": "online", 00:19:43.563 "raid_level": "raid1", 00:19:43.563 "superblock": true, 00:19:43.563 "num_base_bdevs": 2, 00:19:43.563 "num_base_bdevs_discovered": 1, 00:19:43.563 "num_base_bdevs_operational": 1, 00:19:43.563 "base_bdevs_list": [ 00:19:43.563 { 00:19:43.563 "name": null, 00:19:43.563 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:43.563 "is_configured": false, 00:19:43.563 "data_offset": 0, 00:19:43.563 "data_size": 7936 00:19:43.563 }, 00:19:43.563 { 00:19:43.563 "name": "BaseBdev2", 00:19:43.563 "uuid": "25d3f40e-4b32-4cd0-a45a-884f65ab7315", 00:19:43.563 "is_configured": true, 00:19:43.563 "data_offset": 256, 00:19:43.563 "data_size": 7936 00:19:43.563 } 00:19:43.563 ] 00:19:43.563 }' 00:19:43.563 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.563 10:13:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.820 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:44.079 10:13:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.079 [2024-11-19 10:13:58.096203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:44.079 [2024-11-19 10:13:58.096361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.079 [2024-11-19 10:13:58.190821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.079 [2024-11-19 10:13:58.191129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.079 [2024-11-19 10:13:58.191168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88930 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88930 ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88930 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88930 00:19:44.079 killing process with pid 88930 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88930' 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88930 00:19:44.079 [2024-11-19 10:13:58.275082] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.079 10:13:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88930 00:19:44.079 [2024-11-19 10:13:58.290839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.457 
************************************ 00:19:45.457 END TEST raid_state_function_test_sb_md_interleaved 00:19:45.457 ************************************ 00:19:45.457 10:13:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:45.457 00:19:45.457 real 0m5.699s 00:19:45.457 user 0m8.503s 00:19:45.457 sys 0m0.899s 00:19:45.457 10:13:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.457 10:13:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.457 10:13:59 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:45.457 10:13:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:45.457 10:13:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.457 10:13:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.457 ************************************ 00:19:45.457 START TEST raid_superblock_test_md_interleaved 00:19:45.457 ************************************ 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:45.457 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89188 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89188 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89188 ']' 00:19:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.458 10:13:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.458 [2024-11-19 10:13:59.572696] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:45.458 [2024-11-19 10:13:59.573159] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89188 ] 00:19:45.729 [2024-11-19 10:13:59.768767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.729 [2024-11-19 10:13:59.947484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.987 [2024-11-19 10:14:00.186294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.987 [2024-11-19 10:14:00.186600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.554 malloc1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.554 [2024-11-19 10:14:00.686966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:46.554 [2024-11-19 10:14:00.687053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:46.554 [2024-11-19 10:14:00.687092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:46.554 [2024-11-19 10:14:00.687120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.554 [2024-11-19 10:14:00.689840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.554 [2024-11-19 10:14:00.690014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:46.554 pt1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.554 10:14:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.554 malloc2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.554 [2024-11-19 10:14:00.751256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:46.554 [2024-11-19 10:14:00.751348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.554 [2024-11-19 10:14:00.751387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:46.554 [2024-11-19 10:14:00.751404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.554 [2024-11-19 10:14:00.754384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.554 [2024-11-19 10:14:00.754446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:46.554 pt2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.554 [2024-11-19 10:14:00.759385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:46.554 [2024-11-19 10:14:00.762944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:46.554 [2024-11-19 10:14:00.763339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:46.554 [2024-11-19 10:14:00.763366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:46.554 [2024-11-19 10:14:00.763544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:46.554 [2024-11-19 10:14:00.763715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:46.554 [2024-11-19 10:14:00.763747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:46.554 [2024-11-19 10:14:00.763997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.554 10:14:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.554 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.555 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.813 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.813 "name": "raid_bdev1", 00:19:46.813 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:46.813 "strip_size_kb": 0, 00:19:46.813 "state": "online", 00:19:46.813 "raid_level": "raid1", 00:19:46.813 "superblock": true, 00:19:46.813 "num_base_bdevs": 2, 00:19:46.813 "num_base_bdevs_discovered": 2, 00:19:46.813 "num_base_bdevs_operational": 2, 00:19:46.813 "base_bdevs_list": [ 00:19:46.813 { 00:19:46.813 "name": "pt1", 00:19:46.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:46.813 "is_configured": true, 00:19:46.813 "data_offset": 256, 00:19:46.813 "data_size": 7936 00:19:46.813 }, 00:19:46.813 { 00:19:46.813 "name": "pt2", 00:19:46.813 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:46.813 "is_configured": true, 00:19:46.813 "data_offset": 256, 00:19:46.813 "data_size": 7936 00:19:46.813 } 00:19:46.813 ] 00:19:46.813 }' 00:19:46.813 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.813 10:14:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.072 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:47.073 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:47.332 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.332 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.332 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.332 [2024-11-19 10:14:01.308079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.332 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.332 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:19:47.332 "name": "raid_bdev1", 00:19:47.332 "aliases": [ 00:19:47.332 "1866b1ba-0685-4ce0-a9f1-879a337f4f17" 00:19:47.332 ], 00:19:47.332 "product_name": "Raid Volume", 00:19:47.332 "block_size": 4128, 00:19:47.332 "num_blocks": 7936, 00:19:47.332 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:47.332 "md_size": 32, 00:19:47.332 "md_interleave": true, 00:19:47.332 "dif_type": 0, 00:19:47.332 "assigned_rate_limits": { 00:19:47.332 "rw_ios_per_sec": 0, 00:19:47.332 "rw_mbytes_per_sec": 0, 00:19:47.332 "r_mbytes_per_sec": 0, 00:19:47.332 "w_mbytes_per_sec": 0 00:19:47.332 }, 00:19:47.332 "claimed": false, 00:19:47.332 "zoned": false, 00:19:47.332 "supported_io_types": { 00:19:47.332 "read": true, 00:19:47.332 "write": true, 00:19:47.332 "unmap": false, 00:19:47.332 "flush": false, 00:19:47.332 "reset": true, 00:19:47.332 "nvme_admin": false, 00:19:47.332 "nvme_io": false, 00:19:47.332 "nvme_io_md": false, 00:19:47.332 "write_zeroes": true, 00:19:47.332 "zcopy": false, 00:19:47.332 "get_zone_info": false, 00:19:47.332 "zone_management": false, 00:19:47.332 "zone_append": false, 00:19:47.332 "compare": false, 00:19:47.332 "compare_and_write": false, 00:19:47.332 "abort": false, 00:19:47.332 "seek_hole": false, 00:19:47.332 "seek_data": false, 00:19:47.332 "copy": false, 00:19:47.332 "nvme_iov_md": false 00:19:47.332 }, 00:19:47.332 "memory_domains": [ 00:19:47.332 { 00:19:47.332 "dma_device_id": "system", 00:19:47.332 "dma_device_type": 1 00:19:47.332 }, 00:19:47.332 { 00:19:47.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.332 "dma_device_type": 2 00:19:47.332 }, 00:19:47.332 { 00:19:47.332 "dma_device_id": "system", 00:19:47.332 "dma_device_type": 1 00:19:47.332 }, 00:19:47.332 { 00:19:47.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.332 "dma_device_type": 2 00:19:47.332 } 00:19:47.332 ], 00:19:47.332 "driver_specific": { 00:19:47.332 "raid": { 00:19:47.332 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:47.332 "strip_size_kb": 0, 
00:19:47.332 "state": "online", 00:19:47.332 "raid_level": "raid1", 00:19:47.332 "superblock": true, 00:19:47.332 "num_base_bdevs": 2, 00:19:47.332 "num_base_bdevs_discovered": 2, 00:19:47.332 "num_base_bdevs_operational": 2, 00:19:47.332 "base_bdevs_list": [ 00:19:47.332 { 00:19:47.332 "name": "pt1", 00:19:47.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.332 "is_configured": true, 00:19:47.332 "data_offset": 256, 00:19:47.332 "data_size": 7936 00:19:47.332 }, 00:19:47.332 { 00:19:47.332 "name": "pt2", 00:19:47.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.332 "is_configured": true, 00:19:47.332 "data_offset": 256, 00:19:47.333 "data_size": 7936 00:19:47.333 } 00:19:47.333 ] 00:19:47.333 } 00:19:47.333 } 00:19:47.333 }' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:47.333 pt2' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.333 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:19:47.333 [2024-11-19 10:14:01.556059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.591 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.591 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1866b1ba-0685-4ce0-a9f1-879a337f4f17 00:19:47.591 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 1866b1ba-0685-4ce0-a9f1-879a337f4f17 ']' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 [2024-11-19 10:14:01.607685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.592 [2024-11-19 10:14:01.607850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.592 [2024-11-19 10:14:01.608095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.592 [2024-11-19 10:14:01.608278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.592 [2024-11-19 10:14:01.608429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.592 10:14:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 [2024-11-19 10:14:01.771808] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:47.592 [2024-11-19 10:14:01.774590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:47.592 [2024-11-19 10:14:01.774707] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:47.592 [2024-11-19 10:14:01.774818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:47.592 [2024-11-19 10:14:01.774850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.592 [2024-11-19 10:14:01.774867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:47.592 request: 00:19:47.592 { 00:19:47.592 "name": "raid_bdev1", 00:19:47.592 "raid_level": "raid1", 00:19:47.592 "base_bdevs": [ 00:19:47.592 "malloc1", 00:19:47.592 "malloc2" 00:19:47.592 ], 00:19:47.592 "superblock": false, 00:19:47.592 "method": "bdev_raid_create", 00:19:47.592 "req_id": 1 00:19:47.592 } 00:19:47.592 Got JSON-RPC error response 00:19:47.592 response: 00:19:47.592 { 00:19:47.592 "code": -17, 00:19:47.592 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:47.592 } 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.592 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.850 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:47.850 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:47.850 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:47.850 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.851 [2024-11-19 10:14:01.859807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:47.851 [2024-11-19 10:14:01.860048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.851 [2024-11-19 10:14:01.860127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:47.851 [2024-11-19 10:14:01.860362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.851 [2024-11-19 10:14:01.863245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.851 [2024-11-19 10:14:01.863409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:47.851 [2024-11-19 10:14:01.863611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:19:47.851 [2024-11-19 10:14:01.863831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:47.851 pt1 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.851 "name": "raid_bdev1", 00:19:47.851 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:47.851 "strip_size_kb": 0, 00:19:47.851 "state": "configuring", 00:19:47.851 "raid_level": "raid1", 00:19:47.851 "superblock": true, 00:19:47.851 "num_base_bdevs": 2, 00:19:47.851 "num_base_bdevs_discovered": 1, 00:19:47.851 "num_base_bdevs_operational": 2, 00:19:47.851 "base_bdevs_list": [ 00:19:47.851 { 00:19:47.851 "name": "pt1", 00:19:47.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.851 "is_configured": true, 00:19:47.851 "data_offset": 256, 00:19:47.851 "data_size": 7936 00:19:47.851 }, 00:19:47.851 { 00:19:47.851 "name": null, 00:19:47.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.851 "is_configured": false, 00:19:47.851 "data_offset": 256, 00:19:47.851 "data_size": 7936 00:19:47.851 } 00:19:47.851 ] 00:19:47.851 }' 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.851 10:14:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:48.417 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 [2024-11-19 10:14:02.460525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.417 [2024-11-19 10:14:02.460649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.417 [2024-11-19 10:14:02.460687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:48.417 [2024-11-19 10:14:02.460707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.418 [2024-11-19 10:14:02.460996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.418 [2024-11-19 10:14:02.461025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.418 [2024-11-19 10:14:02.461106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:48.418 [2024-11-19 10:14:02.461157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.418 [2024-11-19 10:14:02.461298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:48.418 [2024-11-19 10:14:02.461320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:48.418 [2024-11-19 10:14:02.461422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:48.418 [2024-11-19 10:14:02.461532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:48.418 [2024-11-19 10:14:02.461548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:48.418 [2024-11-19 10:14:02.461647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.418 pt2 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.418 "name": "raid_bdev1", 00:19:48.418 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:48.418 "strip_size_kb": 0, 00:19:48.418 "state": "online", 00:19:48.418 "raid_level": "raid1", 00:19:48.418 "superblock": true, 00:19:48.418 "num_base_bdevs": 2, 00:19:48.418 "num_base_bdevs_discovered": 2, 00:19:48.418 "num_base_bdevs_operational": 2, 00:19:48.418 "base_bdevs_list": [ 00:19:48.418 { 00:19:48.418 "name": "pt1", 00:19:48.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.418 "is_configured": true, 00:19:48.418 "data_offset": 256, 00:19:48.418 "data_size": 7936 00:19:48.418 }, 00:19:48.418 { 00:19:48.418 "name": "pt2", 00:19:48.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.418 "is_configured": true, 00:19:48.418 "data_offset": 256, 00:19:48.418 "data_size": 7936 00:19:48.418 } 00:19:48.418 ] 00:19:48.418 }' 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.418 10:14:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:48.984 10:14:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:48.984 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.985 [2024-11-19 10:14:03.013032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:48.985 "name": "raid_bdev1", 00:19:48.985 "aliases": [ 00:19:48.985 "1866b1ba-0685-4ce0-a9f1-879a337f4f17" 00:19:48.985 ], 00:19:48.985 "product_name": "Raid Volume", 00:19:48.985 "block_size": 4128, 00:19:48.985 "num_blocks": 7936, 00:19:48.985 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:48.985 "md_size": 32, 00:19:48.985 "md_interleave": true, 00:19:48.985 "dif_type": 0, 00:19:48.985 "assigned_rate_limits": { 00:19:48.985 "rw_ios_per_sec": 0, 00:19:48.985 "rw_mbytes_per_sec": 0, 00:19:48.985 "r_mbytes_per_sec": 0, 00:19:48.985 "w_mbytes_per_sec": 0 00:19:48.985 }, 00:19:48.985 "claimed": false, 00:19:48.985 "zoned": false, 00:19:48.985 "supported_io_types": { 00:19:48.985 "read": true, 00:19:48.985 "write": true, 00:19:48.985 "unmap": false, 00:19:48.985 "flush": false, 00:19:48.985 "reset": true, 00:19:48.985 "nvme_admin": false, 00:19:48.985 "nvme_io": false, 00:19:48.985 "nvme_io_md": false, 00:19:48.985 "write_zeroes": true, 00:19:48.985 "zcopy": false, 00:19:48.985 "get_zone_info": false, 00:19:48.985 "zone_management": 
false, 00:19:48.985 "zone_append": false, 00:19:48.985 "compare": false, 00:19:48.985 "compare_and_write": false, 00:19:48.985 "abort": false, 00:19:48.985 "seek_hole": false, 00:19:48.985 "seek_data": false, 00:19:48.985 "copy": false, 00:19:48.985 "nvme_iov_md": false 00:19:48.985 }, 00:19:48.985 "memory_domains": [ 00:19:48.985 { 00:19:48.985 "dma_device_id": "system", 00:19:48.985 "dma_device_type": 1 00:19:48.985 }, 00:19:48.985 { 00:19:48.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.985 "dma_device_type": 2 00:19:48.985 }, 00:19:48.985 { 00:19:48.985 "dma_device_id": "system", 00:19:48.985 "dma_device_type": 1 00:19:48.985 }, 00:19:48.985 { 00:19:48.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.985 "dma_device_type": 2 00:19:48.985 } 00:19:48.985 ], 00:19:48.985 "driver_specific": { 00:19:48.985 "raid": { 00:19:48.985 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:48.985 "strip_size_kb": 0, 00:19:48.985 "state": "online", 00:19:48.985 "raid_level": "raid1", 00:19:48.985 "superblock": true, 00:19:48.985 "num_base_bdevs": 2, 00:19:48.985 "num_base_bdevs_discovered": 2, 00:19:48.985 "num_base_bdevs_operational": 2, 00:19:48.985 "base_bdevs_list": [ 00:19:48.985 { 00:19:48.985 "name": "pt1", 00:19:48.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.985 "is_configured": true, 00:19:48.985 "data_offset": 256, 00:19:48.985 "data_size": 7936 00:19:48.985 }, 00:19:48.985 { 00:19:48.985 "name": "pt2", 00:19:48.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.985 "is_configured": true, 00:19:48.985 "data_offset": 256, 00:19:48.985 "data_size": 7936 00:19:48.985 } 00:19:48.985 ] 00:19:48.985 } 00:19:48.985 } 00:19:48.985 }' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:19:48.985 pt2' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.985 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:49.243 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.244 [2024-11-19 10:14:03.281146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 1866b1ba-0685-4ce0-a9f1-879a337f4f17 '!=' 1866b1ba-0685-4ce0-a9f1-879a337f4f17 ']' 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.244 10:14:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.244 [2024-11-19 10:14:03.328890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.244 "name": "raid_bdev1", 00:19:49.244 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:49.244 "strip_size_kb": 0, 00:19:49.244 "state": "online", 00:19:49.244 "raid_level": "raid1", 00:19:49.244 "superblock": true, 00:19:49.244 "num_base_bdevs": 2, 00:19:49.244 "num_base_bdevs_discovered": 1, 00:19:49.244 "num_base_bdevs_operational": 1, 00:19:49.244 "base_bdevs_list": [ 00:19:49.244 { 00:19:49.244 "name": null, 00:19:49.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.244 "is_configured": false, 00:19:49.244 "data_offset": 0, 00:19:49.244 "data_size": 7936 00:19:49.244 }, 00:19:49.244 { 00:19:49.244 "name": "pt2", 00:19:49.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.244 "is_configured": true, 00:19:49.244 "data_offset": 256, 00:19:49.244 "data_size": 7936 00:19:49.244 } 00:19:49.244 ] 00:19:49.244 }' 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.244 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 [2024-11-19 10:14:03.840941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.860 [2024-11-19 10:14:03.840992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:19:49.860 [2024-11-19 10:14:03.841117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.860 [2024-11-19 10:14:03.841196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.860 [2024-11-19 10:14:03.841218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 10:14:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.860 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.861 [2024-11-19 10:14:03.912935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:49.861 [2024-11-19 10:14:03.913033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.861 [2024-11-19 10:14:03.913064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:49.861 [2024-11-19 10:14:03.913084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.861 [2024-11-19 10:14:03.915909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.861 [2024-11-19 10:14:03.916107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:49.861 [2024-11-19 10:14:03.916213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:49.861 [2024-11-19 10:14:03.916291] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:49.861 [2024-11-19 10:14:03.916401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:49.861 [2024-11-19 10:14:03.916443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:49.861 [2024-11-19 10:14:03.916580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:49.861 [2024-11-19 10:14:03.916680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:49.861 [2024-11-19 10:14:03.916705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:49.861 [2024-11-19 10:14:03.916897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.861 pt2 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.861 "name": "raid_bdev1", 00:19:49.861 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:49.861 "strip_size_kb": 0, 00:19:49.861 "state": "online", 00:19:49.861 "raid_level": "raid1", 00:19:49.861 "superblock": true, 00:19:49.861 "num_base_bdevs": 2, 00:19:49.861 "num_base_bdevs_discovered": 1, 00:19:49.861 "num_base_bdevs_operational": 1, 00:19:49.861 "base_bdevs_list": [ 00:19:49.861 { 00:19:49.861 "name": null, 00:19:49.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.861 "is_configured": false, 00:19:49.861 "data_offset": 256, 00:19:49.861 "data_size": 7936 00:19:49.861 }, 00:19:49.861 { 00:19:49.861 "name": "pt2", 00:19:49.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.861 "is_configured": true, 00:19:49.861 "data_offset": 256, 00:19:49.861 "data_size": 7936 00:19:49.861 } 00:19:49.861 ] 00:19:49.861 }' 00:19:49.861 10:14:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.861 10:14:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 [2024-11-19 10:14:04.437078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.430 [2024-11-19 10:14:04.437125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.430 [2024-11-19 10:14:04.437256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.430 [2024-11-19 10:14:04.437341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.430 [2024-11-19 10:14:04.437359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:50.430 10:14:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 [2024-11-19 10:14:04.493166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:50.430 [2024-11-19 10:14:04.493288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.430 [2024-11-19 10:14:04.493328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:50.430 [2024-11-19 10:14:04.493346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.430 [2024-11-19 10:14:04.496226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.430 [2024-11-19 10:14:04.496276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:50.430 [2024-11-19 10:14:04.496369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:50.430 [2024-11-19 10:14:04.496454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:50.430 [2024-11-19 10:14:04.496603] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:50.430 [2024-11-19 10:14:04.496624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.430 [2024-11-19 10:14:04.496652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:19:50.430 [2024-11-19 10:14:04.496727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.430 [2024-11-19 10:14:04.496875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:50.430 [2024-11-19 10:14:04.496900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:50.430 [2024-11-19 10:14:04.497018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:50.430 [2024-11-19 10:14:04.497116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:50.430 [2024-11-19 10:14:04.497144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:50.430 [2024-11-19 10:14:04.497309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.430 pt1 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.430 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.431 10:14:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.431 "name": "raid_bdev1", 00:19:50.431 "uuid": "1866b1ba-0685-4ce0-a9f1-879a337f4f17", 00:19:50.431 "strip_size_kb": 0, 00:19:50.431 "state": "online", 00:19:50.431 "raid_level": "raid1", 00:19:50.431 "superblock": true, 00:19:50.431 "num_base_bdevs": 2, 00:19:50.431 "num_base_bdevs_discovered": 1, 00:19:50.431 "num_base_bdevs_operational": 1, 00:19:50.431 "base_bdevs_list": [ 00:19:50.431 { 00:19:50.431 "name": null, 00:19:50.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.431 "is_configured": false, 00:19:50.431 "data_offset": 256, 00:19:50.431 "data_size": 7936 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "name": "pt2", 00:19:50.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.431 "is_configured": true, 00:19:50.431 "data_offset": 256, 00:19:50.431 
"data_size": 7936 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }' 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.431 10:14:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.998 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.999 [2024-11-19 10:14:05.061820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 1866b1ba-0685-4ce0-a9f1-879a337f4f17 '!=' 1866b1ba-0685-4ce0-a9f1-879a337f4f17 ']' 00:19:50.999 10:14:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89188 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89188 ']' 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89188 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89188 00:19:50.999 killing process with pid 89188 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89188' 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89188 00:19:50.999 [2024-11-19 10:14:05.131453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:50.999 [2024-11-19 10:14:05.131601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.999 10:14:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89188 00:19:50.999 [2024-11-19 10:14:05.131678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.999 [2024-11-19 10:14:05.131704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:51.256 [2024-11-19 10:14:05.334904] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.190 10:14:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:52.190 00:19:52.190 real 0m6.939s 00:19:52.190 user 0m10.893s 00:19:52.190 sys 0m1.093s 00:19:52.190 ************************************ 00:19:52.190 END TEST raid_superblock_test_md_interleaved 00:19:52.190 ************************************ 00:19:52.190 10:14:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.190 10:14:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.449 10:14:06 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:52.449 10:14:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:52.449 10:14:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.449 10:14:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.449 ************************************ 00:19:52.449 START TEST raid_rebuild_test_sb_md_interleaved 00:19:52.449 ************************************ 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:52.449 10:14:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:52.449 
10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89516 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89516 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89516 ']' 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.449 10:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.449 [2024-11-19 10:14:06.565713] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:19:52.449 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:52.449 Zero copy mechanism will not be used. 
00:19:52.449 [2024-11-19 10:14:06.566592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89516 ] 00:19:52.708 [2024-11-19 10:14:06.752357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.708 [2024-11-19 10:14:06.885974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.965 [2024-11-19 10:14:07.111081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.965 [2024-11-19 10:14:07.111200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 BaseBdev1_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-11-19 10:14:07.609409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:53.532 [2024-11-19 10:14:07.610070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.532 [2024-11-19 10:14:07.610277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:53.532 [2024-11-19 10:14:07.610335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.532 [2024-11-19 10:14:07.613467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.532 [2024-11-19 10:14:07.613758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:53.532 BaseBdev1 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 BaseBdev2_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-11-19 10:14:07.666277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:53.532 [2024-11-19 10:14:07.666863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.532 [2024-11-19 10:14:07.666917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:53.532 [2024-11-19 10:14:07.666942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.532 [2024-11-19 10:14:07.669793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.532 [2024-11-19 10:14:07.669864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:53.532 BaseBdev2 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 spare_malloc 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 spare_delay 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-11-19 10:14:07.744874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:53.532 [2024-11-19 10:14:07.744969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.532 [2024-11-19 10:14:07.745008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:53.532 [2024-11-19 10:14:07.745029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.532 [2024-11-19 10:14:07.747846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.532 [2024-11-19 10:14:07.748047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:53.532 spare 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.532 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-11-19 10:14:07.753000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.532 [2024-11-19 10:14:07.755679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.532 [2024-11-19 
10:14:07.755991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:53.532 [2024-11-19 10:14:07.756015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:53.533 [2024-11-19 10:14:07.756144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:53.533 [2024-11-19 10:14:07.756257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:53.533 [2024-11-19 10:14:07.756272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:53.533 [2024-11-19 10:14:07.756386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.533 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.793 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.793 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.793 "name": "raid_bdev1", 00:19:53.793 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:53.793 "strip_size_kb": 0, 00:19:53.793 "state": "online", 00:19:53.793 "raid_level": "raid1", 00:19:53.793 "superblock": true, 00:19:53.793 "num_base_bdevs": 2, 00:19:53.793 "num_base_bdevs_discovered": 2, 00:19:53.793 "num_base_bdevs_operational": 2, 00:19:53.793 "base_bdevs_list": [ 00:19:53.793 { 00:19:53.793 "name": "BaseBdev1", 00:19:53.793 "uuid": "de698317-841f-5138-a0d8-7b43795da9c2", 00:19:53.793 "is_configured": true, 00:19:53.793 "data_offset": 256, 00:19:53.793 "data_size": 7936 00:19:53.793 }, 00:19:53.793 { 00:19:53.793 "name": "BaseBdev2", 00:19:53.793 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:53.793 "is_configured": true, 00:19:53.793 "data_offset": 256, 00:19:53.793 "data_size": 7936 00:19:53.793 } 00:19:53.793 ] 00:19:53.793 }' 00:19:53.793 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.793 10:14:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.052 10:14:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:54.052 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:54.052 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.052 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.052 [2024-11-19 10:14:08.277550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:54.310 10:14:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.310 [2024-11-19 10:14:08.377219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.310 10:14:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.310 "name": "raid_bdev1", 00:19:54.310 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:54.310 "strip_size_kb": 0, 00:19:54.310 "state": "online", 00:19:54.310 "raid_level": "raid1", 00:19:54.310 "superblock": true, 00:19:54.310 "num_base_bdevs": 2, 00:19:54.310 "num_base_bdevs_discovered": 1, 00:19:54.310 "num_base_bdevs_operational": 1, 00:19:54.310 "base_bdevs_list": [ 00:19:54.310 { 00:19:54.310 "name": null, 00:19:54.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.310 "is_configured": false, 00:19:54.310 "data_offset": 0, 00:19:54.310 "data_size": 7936 00:19:54.310 }, 00:19:54.310 { 00:19:54.310 "name": "BaseBdev2", 00:19:54.310 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:54.310 "is_configured": true, 00:19:54.310 "data_offset": 256, 00:19:54.310 "data_size": 7936 00:19:54.310 } 00:19:54.310 ] 00:19:54.310 }' 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.310 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.874 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:54.874 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.874 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.874 [2024-11-19 10:14:08.861361] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.874 [2024-11-19 10:14:08.879269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:54.874 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.874 10:14:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:54.874 [2024-11-19 10:14:08.882155] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.811 "name": "raid_bdev1", 00:19:55.811 
"uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:55.811 "strip_size_kb": 0, 00:19:55.811 "state": "online", 00:19:55.811 "raid_level": "raid1", 00:19:55.811 "superblock": true, 00:19:55.811 "num_base_bdevs": 2, 00:19:55.811 "num_base_bdevs_discovered": 2, 00:19:55.811 "num_base_bdevs_operational": 2, 00:19:55.811 "process": { 00:19:55.811 "type": "rebuild", 00:19:55.811 "target": "spare", 00:19:55.811 "progress": { 00:19:55.811 "blocks": 2304, 00:19:55.811 "percent": 29 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 "base_bdevs_list": [ 00:19:55.811 { 00:19:55.811 "name": "spare", 00:19:55.811 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:19:55.811 "is_configured": true, 00:19:55.811 "data_offset": 256, 00:19:55.811 "data_size": 7936 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "name": "BaseBdev2", 00:19:55.811 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:55.811 "is_configured": true, 00:19:55.811 "data_offset": 256, 00:19:55.811 "data_size": 7936 00:19:55.811 } 00:19:55.811 ] 00:19:55.811 }' 00:19:55.811 10:14:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.812 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.812 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.070 [2024-11-19 10:14:10.068107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:56.070 [2024-11-19 10:14:10.094225] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.070 [2024-11-19 10:14:10.094369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.070 [2024-11-19 10:14:10.094399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.070 [2024-11-19 10:14:10.094421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.070 "name": "raid_bdev1", 00:19:56.070 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:56.070 "strip_size_kb": 0, 00:19:56.070 "state": "online", 00:19:56.070 "raid_level": "raid1", 00:19:56.070 "superblock": true, 00:19:56.070 "num_base_bdevs": 2, 00:19:56.070 "num_base_bdevs_discovered": 1, 00:19:56.070 "num_base_bdevs_operational": 1, 00:19:56.070 "base_bdevs_list": [ 00:19:56.070 { 00:19:56.070 "name": null, 00:19:56.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.070 "is_configured": false, 00:19:56.070 "data_offset": 0, 00:19:56.070 "data_size": 7936 00:19:56.070 }, 00:19:56.070 { 00:19:56.070 "name": "BaseBdev2", 00:19:56.070 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:56.070 "is_configured": true, 00:19:56.070 "data_offset": 256, 00:19:56.070 "data_size": 7936 00:19:56.070 } 00:19:56.070 ] 00:19:56.070 }' 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.070 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.635 "name": "raid_bdev1", 00:19:56.635 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:56.635 "strip_size_kb": 0, 00:19:56.635 "state": "online", 00:19:56.635 "raid_level": "raid1", 00:19:56.635 "superblock": true, 00:19:56.635 "num_base_bdevs": 2, 00:19:56.635 "num_base_bdevs_discovered": 1, 00:19:56.635 "num_base_bdevs_operational": 1, 00:19:56.635 "base_bdevs_list": [ 00:19:56.635 { 00:19:56.635 "name": null, 00:19:56.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.635 "is_configured": false, 00:19:56.635 "data_offset": 0, 00:19:56.635 "data_size": 7936 00:19:56.635 }, 00:19:56.635 { 00:19:56.635 "name": "BaseBdev2", 00:19:56.635 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:56.635 "is_configured": true, 00:19:56.635 "data_offset": 256, 00:19:56.635 "data_size": 7936 00:19:56.635 } 00:19:56.635 ] 00:19:56.635 }' 
00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.635 [2024-11-19 10:14:10.781360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.635 [2024-11-19 10:14:10.798346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.635 10:14:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:56.635 [2024-11-19 10:14:10.801169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.011 "name": "raid_bdev1", 00:19:58.011 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:58.011 "strip_size_kb": 0, 00:19:58.011 "state": "online", 00:19:58.011 "raid_level": "raid1", 00:19:58.011 "superblock": true, 00:19:58.011 "num_base_bdevs": 2, 00:19:58.011 "num_base_bdevs_discovered": 2, 00:19:58.011 "num_base_bdevs_operational": 2, 00:19:58.011 "process": { 00:19:58.011 "type": "rebuild", 00:19:58.011 "target": "spare", 00:19:58.011 "progress": { 00:19:58.011 "blocks": 2560, 00:19:58.011 "percent": 32 00:19:58.011 } 00:19:58.011 }, 00:19:58.011 "base_bdevs_list": [ 00:19:58.011 { 00:19:58.011 "name": "spare", 00:19:58.011 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:19:58.011 "is_configured": true, 00:19:58.011 "data_offset": 256, 00:19:58.011 "data_size": 7936 00:19:58.011 }, 00:19:58.011 { 00:19:58.011 "name": "BaseBdev2", 00:19:58.011 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:58.011 "is_configured": true, 00:19:58.011 "data_offset": 256, 00:19:58.011 "data_size": 7936 00:19:58.011 } 00:19:58.011 ] 00:19:58.011 }' 00:19:58.011 10:14:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:58.011 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=820 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.011 10:14:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.011 10:14:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.011 "name": "raid_bdev1", 00:19:58.011 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:58.011 "strip_size_kb": 0, 00:19:58.011 "state": "online", 00:19:58.011 "raid_level": "raid1", 00:19:58.011 "superblock": true, 00:19:58.011 "num_base_bdevs": 2, 00:19:58.011 "num_base_bdevs_discovered": 2, 00:19:58.011 "num_base_bdevs_operational": 2, 00:19:58.011 "process": { 00:19:58.011 "type": "rebuild", 00:19:58.011 "target": "spare", 00:19:58.011 "progress": { 00:19:58.011 "blocks": 2816, 00:19:58.011 "percent": 35 00:19:58.011 } 00:19:58.011 }, 00:19:58.011 "base_bdevs_list": [ 00:19:58.011 { 00:19:58.011 "name": "spare", 00:19:58.011 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:19:58.011 "is_configured": true, 00:19:58.011 "data_offset": 256, 00:19:58.011 "data_size": 7936 00:19:58.011 }, 00:19:58.011 { 00:19:58.011 "name": "BaseBdev2", 00:19:58.011 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:58.011 "is_configured": true, 00:19:58.011 "data_offset": 256, 00:19:58.011 "data_size": 7936 00:19:58.011 } 00:19:58.011 ] 00:19:58.011 }' 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.011 10:14:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.946 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.946 10:14:13 
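The `[[ spare == \s\p\a\r\e ]]` comparisons throughout this trace are bash xtrace re-quoting at work: the right-hand side of `==` inside `[[ ]]` is a glob pattern, so when xtrace echoes the expanded command it backslash-escapes every character to show that the pattern is being matched literally. A short sketch of the distinction:

```shell
#!/usr/bin/env bash
target="spare"

# Unquoted RHS is a glob pattern: "sp*" matches "spare".
[[ $target == sp* ]] && echo "glob match"

# Quoting the RHS forces a literal string comparison; the
# \s\p\a\r\e form printed by xtrace is equivalent to this.
[[ $target == "spare" ]] && echo "literal match"
[[ $target == \s\p\a\r\e ]] && echo "escaped literal match"
```

So a `[[ rebuild == \r\e\b\u\i\l\d ]]` line in the log simply means the script compared the jq output against the literal string `rebuild` and it matched.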
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.946 "name": "raid_bdev1", 00:19:58.946 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:19:58.946 "strip_size_kb": 0, 00:19:58.946 "state": "online", 00:19:58.946 "raid_level": "raid1", 00:19:58.946 "superblock": true, 00:19:58.946 "num_base_bdevs": 2, 00:19:58.946 "num_base_bdevs_discovered": 2, 00:19:58.946 "num_base_bdevs_operational": 2, 00:19:58.946 "process": { 00:19:58.946 "type": "rebuild", 00:19:58.946 "target": "spare", 00:19:58.946 "progress": { 00:19:58.946 "blocks": 5632, 00:19:58.946 "percent": 70 00:19:58.946 } 00:19:58.946 }, 00:19:58.946 "base_bdevs_list": [ 00:19:58.946 { 00:19:58.946 "name": "spare", 00:19:58.946 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:19:58.947 "is_configured": true, 00:19:58.947 "data_offset": 256, 00:19:58.947 "data_size": 7936 00:19:58.947 }, 00:19:58.947 { 00:19:58.947 "name": "BaseBdev2", 00:19:58.947 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:19:58.947 "is_configured": true, 00:19:58.947 "data_offset": 256, 00:19:58.947 "data_size": 7936 00:19:58.947 } 00:19:58.947 ] 00:19:58.947 }' 00:19:58.947 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.205 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.205 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.205 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.205 10:14:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.772 [2024-11-19 10:14:13.930693] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:59.772 [2024-11-19 10:14:13.930867] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:59.772 [2024-11-19 10:14:13.931074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.030 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.289 "name": "raid_bdev1", 00:20:00.289 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:00.289 "strip_size_kb": 0, 00:20:00.289 "state": "online", 00:20:00.289 "raid_level": "raid1", 00:20:00.289 "superblock": true, 00:20:00.289 "num_base_bdevs": 2, 00:20:00.289 
"num_base_bdevs_discovered": 2, 00:20:00.289 "num_base_bdevs_operational": 2, 00:20:00.289 "base_bdevs_list": [ 00:20:00.289 { 00:20:00.289 "name": "spare", 00:20:00.289 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 256, 00:20:00.289 "data_size": 7936 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "name": "BaseBdev2", 00:20:00.289 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 256, 00:20:00.289 "data_size": 7936 00:20:00.289 } 00:20:00.289 ] 00:20:00.289 }' 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.289 10:14:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.289 "name": "raid_bdev1", 00:20:00.289 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:00.289 "strip_size_kb": 0, 00:20:00.289 "state": "online", 00:20:00.289 "raid_level": "raid1", 00:20:00.289 "superblock": true, 00:20:00.289 "num_base_bdevs": 2, 00:20:00.289 "num_base_bdevs_discovered": 2, 00:20:00.289 "num_base_bdevs_operational": 2, 00:20:00.289 "base_bdevs_list": [ 00:20:00.289 { 00:20:00.289 "name": "spare", 00:20:00.289 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 256, 00:20:00.289 "data_size": 7936 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "name": "BaseBdev2", 00:20:00.289 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 256, 00:20:00.289 "data_size": 7936 00:20:00.289 } 00:20:00.289 ] 00:20:00.289 }' 00:20:00.289 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.548 10:14:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.548 "name": 
"raid_bdev1", 00:20:00.548 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:00.548 "strip_size_kb": 0, 00:20:00.548 "state": "online", 00:20:00.548 "raid_level": "raid1", 00:20:00.548 "superblock": true, 00:20:00.548 "num_base_bdevs": 2, 00:20:00.548 "num_base_bdevs_discovered": 2, 00:20:00.548 "num_base_bdevs_operational": 2, 00:20:00.548 "base_bdevs_list": [ 00:20:00.548 { 00:20:00.548 "name": "spare", 00:20:00.548 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:00.548 "is_configured": true, 00:20:00.548 "data_offset": 256, 00:20:00.548 "data_size": 7936 00:20:00.548 }, 00:20:00.548 { 00:20:00.548 "name": "BaseBdev2", 00:20:00.548 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:00.548 "is_configured": true, 00:20:00.548 "data_offset": 256, 00:20:00.548 "data_size": 7936 00:20:00.548 } 00:20:00.548 ] 00:20:00.548 }' 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.548 10:14:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 [2024-11-19 10:14:15.045763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.115 [2024-11-19 10:14:15.045978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.115 [2024-11-19 10:14:15.046247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.115 [2024-11-19 10:14:15.046494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.115 [2024-11-19 
10:14:15.046632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.115 10:14:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 [2024-11-19 10:14:15.109769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:01.115 [2024-11-19 10:14:15.109883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.115 [2024-11-19 10:14:15.109924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:01.115 [2024-11-19 10:14:15.109939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.115 [2024-11-19 10:14:15.112864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.115 [2024-11-19 10:14:15.112914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:01.115 [2024-11-19 10:14:15.113025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:01.115 [2024-11-19 10:14:15.113114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.115 [2024-11-19 10:14:15.113289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.115 spare 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 [2024-11-19 10:14:15.213442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:01.115 [2024-11-19 10:14:15.213540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:01.115 [2024-11-19 10:14:15.213733] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:01.115 [2024-11-19 10:14:15.213934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:01.115 [2024-11-19 10:14:15.213951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:01.115 [2024-11-19 10:14:15.214135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.115 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.116 
10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.116 "name": "raid_bdev1", 00:20:01.116 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:01.116 "strip_size_kb": 0, 00:20:01.116 "state": "online", 00:20:01.116 "raid_level": "raid1", 00:20:01.116 "superblock": true, 00:20:01.116 "num_base_bdevs": 2, 00:20:01.116 "num_base_bdevs_discovered": 2, 00:20:01.116 "num_base_bdevs_operational": 2, 00:20:01.116 "base_bdevs_list": [ 00:20:01.116 { 00:20:01.116 "name": "spare", 00:20:01.116 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:01.116 "is_configured": true, 00:20:01.116 "data_offset": 256, 00:20:01.116 "data_size": 7936 00:20:01.116 }, 00:20:01.116 { 00:20:01.116 "name": "BaseBdev2", 00:20:01.116 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:01.116 "is_configured": true, 00:20:01.116 "data_offset": 256, 00:20:01.116 "data_size": 7936 00:20:01.116 } 00:20:01.116 ] 00:20:01.116 }' 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.116 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.686 10:14:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.686 "name": "raid_bdev1", 00:20:01.686 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:01.686 "strip_size_kb": 0, 00:20:01.686 "state": "online", 00:20:01.686 "raid_level": "raid1", 00:20:01.686 "superblock": true, 00:20:01.686 "num_base_bdevs": 2, 00:20:01.686 "num_base_bdevs_discovered": 2, 00:20:01.686 "num_base_bdevs_operational": 2, 00:20:01.686 "base_bdevs_list": [ 00:20:01.686 { 00:20:01.686 "name": "spare", 00:20:01.686 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:01.686 "is_configured": true, 00:20:01.686 "data_offset": 256, 00:20:01.686 "data_size": 7936 00:20:01.686 }, 00:20:01.686 { 00:20:01.686 "name": "BaseBdev2", 00:20:01.686 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:01.686 "is_configured": true, 00:20:01.686 "data_offset": 256, 00:20:01.686 "data_size": 7936 00:20:01.686 } 00:20:01.686 ] 00:20:01.686 }' 00:20:01.686 10:14:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:01.686 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-11-19 10:14:15.938439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.957 10:14:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.957 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.957 "name": "raid_bdev1", 00:20:01.957 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:01.957 "strip_size_kb": 0, 00:20:01.957 "state": "online", 00:20:01.957 
"raid_level": "raid1", 00:20:01.957 "superblock": true, 00:20:01.957 "num_base_bdevs": 2, 00:20:01.957 "num_base_bdevs_discovered": 1, 00:20:01.957 "num_base_bdevs_operational": 1, 00:20:01.957 "base_bdevs_list": [ 00:20:01.957 { 00:20:01.957 "name": null, 00:20:01.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.957 "is_configured": false, 00:20:01.957 "data_offset": 0, 00:20:01.957 "data_size": 7936 00:20:01.957 }, 00:20:01.958 { 00:20:01.958 "name": "BaseBdev2", 00:20:01.958 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:01.958 "is_configured": true, 00:20:01.958 "data_offset": 256, 00:20:01.958 "data_size": 7936 00:20:01.958 } 00:20:01.958 ] 00:20:01.958 }' 00:20:01.958 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.958 10:14:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.526 10:14:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:02.526 10:14:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.526 10:14:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.526 [2024-11-19 10:14:16.482613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.526 [2024-11-19 10:14:16.482941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.526 [2024-11-19 10:14:16.482970] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:02.526 [2024-11-19 10:14:16.483025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.526 [2024-11-19 10:14:16.499488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:02.526 10:14:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.526 10:14:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:02.526 [2024-11-19 10:14:16.502240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:03.464 "name": "raid_bdev1", 00:20:03.464 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:03.464 "strip_size_kb": 0, 00:20:03.464 "state": "online", 00:20:03.464 "raid_level": "raid1", 00:20:03.464 "superblock": true, 00:20:03.464 "num_base_bdevs": 2, 00:20:03.464 "num_base_bdevs_discovered": 2, 00:20:03.464 "num_base_bdevs_operational": 2, 00:20:03.464 "process": { 00:20:03.464 "type": "rebuild", 00:20:03.464 "target": "spare", 00:20:03.464 "progress": { 00:20:03.464 "blocks": 2560, 00:20:03.464 "percent": 32 00:20:03.464 } 00:20:03.464 }, 00:20:03.464 "base_bdevs_list": [ 00:20:03.464 { 00:20:03.464 "name": "spare", 00:20:03.464 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:03.464 "is_configured": true, 00:20:03.464 "data_offset": 256, 00:20:03.464 "data_size": 7936 00:20:03.464 }, 00:20:03.464 { 00:20:03.464 "name": "BaseBdev2", 00:20:03.464 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:03.464 "is_configured": true, 00:20:03.464 "data_offset": 256, 00:20:03.464 "data_size": 7936 00:20:03.464 } 00:20:03.464 ] 00:20:03.464 }' 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.464 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.464 [2024-11-19 10:14:17.660102] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:03.724 [2024-11-19 10:14:17.713758] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:03.724 [2024-11-19 10:14:17.713878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.724 [2024-11-19 10:14:17.713905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:03.724 [2024-11-19 10:14:17.713922] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.724 10:14:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.724 "name": "raid_bdev1", 00:20:03.724 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:03.724 "strip_size_kb": 0, 00:20:03.724 "state": "online", 00:20:03.724 "raid_level": "raid1", 00:20:03.724 "superblock": true, 00:20:03.724 "num_base_bdevs": 2, 00:20:03.724 "num_base_bdevs_discovered": 1, 00:20:03.724 "num_base_bdevs_operational": 1, 00:20:03.724 "base_bdevs_list": [ 00:20:03.724 { 00:20:03.724 "name": null, 00:20:03.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.724 "is_configured": false, 00:20:03.724 "data_offset": 0, 00:20:03.724 "data_size": 7936 00:20:03.724 }, 00:20:03.724 { 00:20:03.724 "name": "BaseBdev2", 00:20:03.724 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:03.724 "is_configured": true, 00:20:03.724 "data_offset": 256, 00:20:03.724 "data_size": 7936 00:20:03.724 } 00:20:03.724 ] 00:20:03.724 }' 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.724 10:14:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.291 10:14:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:04.291 10:14:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.291 10:14:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.291 [2024-11-19 10:14:18.304101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.291 [2024-11-19 10:14:18.304197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.291 [2024-11-19 10:14:18.304242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:04.291 [2024-11-19 10:14:18.304263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.291 [2024-11-19 10:14:18.304583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.291 [2024-11-19 10:14:18.304615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.291 [2024-11-19 10:14:18.304701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:04.291 [2024-11-19 10:14:18.304726] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.291 [2024-11-19 10:14:18.304742] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:04.291 [2024-11-19 10:14:18.304798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:04.291 [2024-11-19 10:14:18.321416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:04.291 spare 00:20:04.291 10:14:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.291 10:14:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:04.291 [2024-11-19 10:14:18.324164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:05.226 "name": "raid_bdev1", 00:20:05.226 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:05.226 "strip_size_kb": 0, 00:20:05.226 "state": "online", 00:20:05.226 "raid_level": "raid1", 00:20:05.226 "superblock": true, 00:20:05.226 "num_base_bdevs": 2, 00:20:05.226 "num_base_bdevs_discovered": 2, 00:20:05.226 "num_base_bdevs_operational": 2, 00:20:05.226 "process": { 00:20:05.226 "type": "rebuild", 00:20:05.226 "target": "spare", 00:20:05.226 "progress": { 00:20:05.226 "blocks": 2560, 00:20:05.226 "percent": 32 00:20:05.226 } 00:20:05.226 }, 00:20:05.226 "base_bdevs_list": [ 00:20:05.226 { 00:20:05.226 "name": "spare", 00:20:05.226 "uuid": "2ae20da1-6f33-5c04-be91-e18aa3836538", 00:20:05.226 "is_configured": true, 00:20:05.226 "data_offset": 256, 00:20:05.226 "data_size": 7936 00:20:05.226 }, 00:20:05.226 { 00:20:05.226 "name": "BaseBdev2", 00:20:05.226 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:05.226 "is_configured": true, 00:20:05.226 "data_offset": 256, 00:20:05.226 "data_size": 7936 00:20:05.226 } 00:20:05.226 ] 00:20:05.226 }' 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.226 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.485 [2024-11-19 
10:14:19.474018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.485 [2024-11-19 10:14:19.535808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:05.485 [2024-11-19 10:14:19.536441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.485 [2024-11-19 10:14:19.536606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.485 [2024-11-19 10:14:19.536757] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.485 10:14:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.485 "name": "raid_bdev1", 00:20:05.485 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:05.485 "strip_size_kb": 0, 00:20:05.485 "state": "online", 00:20:05.485 "raid_level": "raid1", 00:20:05.485 "superblock": true, 00:20:05.485 "num_base_bdevs": 2, 00:20:05.485 "num_base_bdevs_discovered": 1, 00:20:05.485 "num_base_bdevs_operational": 1, 00:20:05.485 "base_bdevs_list": [ 00:20:05.485 { 00:20:05.485 "name": null, 00:20:05.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.485 "is_configured": false, 00:20:05.485 "data_offset": 0, 00:20:05.485 "data_size": 7936 00:20:05.485 }, 00:20:05.485 { 00:20:05.485 "name": "BaseBdev2", 00:20:05.485 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:05.485 "is_configured": true, 00:20:05.485 "data_offset": 256, 00:20:05.485 "data_size": 7936 00:20:05.485 } 00:20:05.485 ] 00:20:05.485 }' 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.485 10:14:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.053 10:14:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.053 "name": "raid_bdev1", 00:20:06.053 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:06.053 "strip_size_kb": 0, 00:20:06.053 "state": "online", 00:20:06.053 "raid_level": "raid1", 00:20:06.053 "superblock": true, 00:20:06.053 "num_base_bdevs": 2, 00:20:06.053 "num_base_bdevs_discovered": 1, 00:20:06.053 "num_base_bdevs_operational": 1, 00:20:06.053 "base_bdevs_list": [ 00:20:06.053 { 00:20:06.053 "name": null, 00:20:06.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.053 "is_configured": false, 00:20:06.053 "data_offset": 0, 00:20:06.053 "data_size": 7936 00:20:06.053 }, 00:20:06.053 { 00:20:06.053 "name": "BaseBdev2", 00:20:06.053 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:06.053 "is_configured": true, 00:20:06.053 "data_offset": 256, 
00:20:06.053 "data_size": 7936 00:20:06.053 } 00:20:06.053 ] 00:20:06.053 }' 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 [2024-11-19 10:14:20.199148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:06.053 [2024-11-19 10:14:20.199363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.053 [2024-11-19 10:14:20.199414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:06.053 [2024-11-19 10:14:20.199431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.053 [2024-11-19 10:14:20.199674] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.053 [2024-11-19 10:14:20.199696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.053 [2024-11-19 10:14:20.199775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:06.053 [2024-11-19 10:14:20.199815] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:06.053 [2024-11-19 10:14:20.199831] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:06.053 [2024-11-19 10:14:20.199853] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:06.053 BaseBdev1 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.053 10:14:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.990 10:14:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.990 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.249 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.249 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.249 "name": "raid_bdev1", 00:20:07.249 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:07.249 "strip_size_kb": 0, 00:20:07.249 "state": "online", 00:20:07.249 "raid_level": "raid1", 00:20:07.249 "superblock": true, 00:20:07.249 "num_base_bdevs": 2, 00:20:07.249 "num_base_bdevs_discovered": 1, 00:20:07.249 "num_base_bdevs_operational": 1, 00:20:07.249 "base_bdevs_list": [ 00:20:07.249 { 00:20:07.249 "name": null, 00:20:07.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.249 "is_configured": false, 00:20:07.249 "data_offset": 0, 00:20:07.249 "data_size": 7936 00:20:07.249 }, 00:20:07.249 { 00:20:07.249 "name": "BaseBdev2", 00:20:07.249 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:07.249 "is_configured": true, 00:20:07.249 "data_offset": 256, 00:20:07.249 "data_size": 7936 00:20:07.249 } 00:20:07.249 ] 00:20:07.249 }' 00:20:07.249 10:14:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.249 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.508 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.767 "name": "raid_bdev1", 00:20:07.767 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:07.767 "strip_size_kb": 0, 00:20:07.767 "state": "online", 00:20:07.767 "raid_level": "raid1", 00:20:07.767 "superblock": true, 00:20:07.767 "num_base_bdevs": 2, 00:20:07.767 "num_base_bdevs_discovered": 1, 00:20:07.767 "num_base_bdevs_operational": 1, 00:20:07.767 "base_bdevs_list": [ 00:20:07.767 { 00:20:07.767 "name": 
null, 00:20:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.767 "is_configured": false, 00:20:07.767 "data_offset": 0, 00:20:07.767 "data_size": 7936 00:20:07.767 }, 00:20:07.767 { 00:20:07.767 "name": "BaseBdev2", 00:20:07.767 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:07.767 "is_configured": true, 00:20:07.767 "data_offset": 256, 00:20:07.767 "data_size": 7936 00:20:07.767 } 00:20:07.767 ] 00:20:07.767 }' 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.767 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.767 [2024-11-19 10:14:21.903737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.767 [2024-11-19 10:14:21.904001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:07.767 [2024-11-19 10:14:21.904031] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:07.767 request: 00:20:07.767 { 00:20:07.767 "base_bdev": "BaseBdev1", 00:20:07.767 "raid_bdev": "raid_bdev1", 00:20:07.767 "method": "bdev_raid_add_base_bdev", 00:20:07.768 "req_id": 1 00:20:07.768 } 00:20:07.768 Got JSON-RPC error response 00:20:07.768 response: 00:20:07.768 { 00:20:07.768 "code": -22, 00:20:07.768 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:07.768 } 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.768 10:14:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.992 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.992 "name": "raid_bdev1", 00:20:08.992 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:08.992 "strip_size_kb": 0, 
00:20:08.992 "state": "online", 00:20:08.992 "raid_level": "raid1", 00:20:08.992 "superblock": true, 00:20:08.992 "num_base_bdevs": 2, 00:20:08.992 "num_base_bdevs_discovered": 1, 00:20:08.992 "num_base_bdevs_operational": 1, 00:20:08.992 "base_bdevs_list": [ 00:20:08.992 { 00:20:08.992 "name": null, 00:20:08.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.992 "is_configured": false, 00:20:08.992 "data_offset": 0, 00:20:08.992 "data_size": 7936 00:20:08.992 }, 00:20:08.992 { 00:20:08.992 "name": "BaseBdev2", 00:20:08.992 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:08.992 "is_configured": true, 00:20:08.992 "data_offset": 256, 00:20:08.992 "data_size": 7936 00:20:08.992 } 00:20:08.992 ] 00:20:08.992 }' 00:20:08.992 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.992 10:14:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.251 
10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.251 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.251 "name": "raid_bdev1", 00:20:09.251 "uuid": "c41fba12-dbd3-416d-b210-65136bec3b2c", 00:20:09.251 "strip_size_kb": 0, 00:20:09.251 "state": "online", 00:20:09.251 "raid_level": "raid1", 00:20:09.251 "superblock": true, 00:20:09.251 "num_base_bdevs": 2, 00:20:09.251 "num_base_bdevs_discovered": 1, 00:20:09.251 "num_base_bdevs_operational": 1, 00:20:09.251 "base_bdevs_list": [ 00:20:09.251 { 00:20:09.251 "name": null, 00:20:09.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.251 "is_configured": false, 00:20:09.251 "data_offset": 0, 00:20:09.251 "data_size": 7936 00:20:09.251 }, 00:20:09.251 { 00:20:09.251 "name": "BaseBdev2", 00:20:09.251 "uuid": "0f4c6980-2524-5c9a-b91d-d96985a18a0b", 00:20:09.251 "is_configured": true, 00:20:09.251 "data_offset": 256, 00:20:09.251 "data_size": 7936 00:20:09.251 } 00:20:09.251 ] 00:20:09.251 }' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89516 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89516 ']' 00:20:09.510 10:14:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89516 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89516 00:20:09.510 killing process with pid 89516 00:20:09.510 Received shutdown signal, test time was about 60.000000 seconds 00:20:09.510 00:20:09.510 Latency(us) 00:20:09.510 [2024-11-19T10:14:23.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.510 [2024-11-19T10:14:23.742Z] =================================================================================================================== 00:20:09.510 [2024-11-19T10:14:23.742Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89516' 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89516 00:20:09.510 10:14:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89516 00:20:09.510 [2024-11-19 10:14:23.627653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.510 [2024-11-19 10:14:23.627854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.510 [2024-11-19 10:14:23.627931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:09.510 [2024-11-19 10:14:23.627956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:09.769 [2024-11-19 10:14:23.913917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.145 ************************************ 00:20:11.145 END TEST raid_rebuild_test_sb_md_interleaved 00:20:11.145 ************************************ 00:20:11.145 10:14:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:11.145 00:20:11.145 real 0m18.561s 00:20:11.145 user 0m25.122s 00:20:11.145 sys 0m1.494s 00:20:11.145 10:14:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.145 10:14:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.145 10:14:25 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:11.145 10:14:25 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:11.145 10:14:25 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89516 ']' 00:20:11.145 10:14:25 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89516 00:20:11.145 10:14:25 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:11.145 ************************************ 00:20:11.145 END TEST bdev_raid 00:20:11.145 ************************************ 00:20:11.145 00:20:11.145 real 13m23.429s 00:20:11.145 user 18m43.336s 00:20:11.145 sys 1m55.210s 00:20:11.145 10:14:25 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.145 10:14:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.145 10:14:25 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.145 10:14:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:11.145 10:14:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.145 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:20:11.145 
************************************ 00:20:11.145 START TEST spdkcli_raid 00:20:11.145 ************************************ 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.145 * Looking for test storage... 00:20:11.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.145 10:14:25 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.145 --rc genhtml_branch_coverage=1 00:20:11.145 --rc genhtml_function_coverage=1 00:20:11.145 --rc genhtml_legend=1 00:20:11.145 --rc geninfo_all_blocks=1 00:20:11.145 --rc geninfo_unexecuted_blocks=1 00:20:11.145 00:20:11.145 ' 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.145 --rc genhtml_branch_coverage=1 00:20:11.145 --rc genhtml_function_coverage=1 00:20:11.145 --rc genhtml_legend=1 00:20:11.145 --rc geninfo_all_blocks=1 00:20:11.145 --rc geninfo_unexecuted_blocks=1 00:20:11.145 00:20:11.145 ' 00:20:11.145 
10:14:25 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.145 --rc genhtml_branch_coverage=1 00:20:11.145 --rc genhtml_function_coverage=1 00:20:11.145 --rc genhtml_legend=1 00:20:11.145 --rc geninfo_all_blocks=1 00:20:11.145 --rc geninfo_unexecuted_blocks=1 00:20:11.145 00:20:11.145 ' 00:20:11.145 10:14:25 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.145 --rc genhtml_branch_coverage=1 00:20:11.145 --rc genhtml_function_coverage=1 00:20:11.145 --rc genhtml_legend=1 00:20:11.145 --rc geninfo_all_blocks=1 00:20:11.145 --rc geninfo_unexecuted_blocks=1 00:20:11.145 00:20:11.145 ' 00:20:11.145 10:14:25 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:11.145 10:14:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:11.145 10:14:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:11.145 10:14:25 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:11.145 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:11.146 10:14:25 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90201 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:11.146 10:14:25 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90201 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90201 ']' 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.146 10:14:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.404 [2024-11-19 10:14:25.494512] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:11.404 [2024-11-19 10:14:25.494706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90201 ] 00:20:11.663 [2024-11-19 10:14:25.688734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:11.922 [2024-11-19 10:14:25.898017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.922 [2024-11-19 10:14:25.898027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:12.858 10:14:26 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.858 10:14:26 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.858 10:14:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.858 10:14:26 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:12.858 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:12.858 ' 00:20:14.761 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:14.761 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:14.761 10:14:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:14.761 10:14:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.761 10:14:28 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.761 10:14:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:14.761 10:14:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.761 10:14:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.761 10:14:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:14.761 ' 00:20:15.696 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:15.696 10:14:29 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:15.696 10:14:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.696 10:14:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.954 10:14:29 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:15.954 10:14:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.954 10:14:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.954 10:14:29 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:15.954 10:14:29 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:16.520 10:14:30 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:16.520 10:14:30 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:16.520 10:14:30 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:16.520 10:14:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.520 10:14:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.520 10:14:30 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:16.520 10:14:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.520 10:14:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.520 10:14:30 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:16.520 ' 00:20:17.896 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:17.896 10:14:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:17.896 10:14:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.896 10:14:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.896 10:14:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:17.896 10:14:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.896 10:14:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.896 10:14:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:17.896 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:17.896 ' 00:20:19.298 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:19.298 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:19.298 10:14:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:19.298 10:14:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.298 10:14:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 10:14:33 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90201 00:20:19.556 10:14:33 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90201 ']' 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90201 00:20:19.557 10:14:33 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90201 00:20:19.557 killing process with pid 90201 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90201' 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90201 00:20:19.557 10:14:33 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90201 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90201 ']' 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90201 00:20:22.090 10:14:35 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90201 ']' 00:20:22.090 Process with pid 90201 is not found 00:20:22.090 10:14:35 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90201 00:20:22.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90201) - No such process 00:20:22.090 10:14:35 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90201 is not found' 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:22.090 10:14:35 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:22.090 ************************************ 00:20:22.090 END TEST spdkcli_raid 
00:20:22.090 ************************************ 00:20:22.090 00:20:22.090 real 0m10.838s 00:20:22.090 user 0m22.497s 00:20:22.090 sys 0m1.318s 00:20:22.090 10:14:35 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.090 10:14:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:22.090 10:14:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:22.090 10:14:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.090 10:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.090 10:14:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.090 ************************************ 00:20:22.090 START TEST blockdev_raid5f 00:20:22.090 ************************************ 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:22.090 * Looking for test storage... 00:20:22.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.090 10:14:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.090 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.090 --rc genhtml_branch_coverage=1 00:20:22.090 --rc genhtml_function_coverage=1 00:20:22.090 --rc genhtml_legend=1 00:20:22.090 --rc geninfo_all_blocks=1 00:20:22.090 --rc geninfo_unexecuted_blocks=1 00:20:22.090 00:20:22.090 ' 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.090 --rc genhtml_branch_coverage=1 00:20:22.090 --rc genhtml_function_coverage=1 00:20:22.090 --rc genhtml_legend=1 00:20:22.090 --rc geninfo_all_blocks=1 00:20:22.090 --rc geninfo_unexecuted_blocks=1 00:20:22.090 00:20:22.090 ' 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.090 --rc genhtml_branch_coverage=1 00:20:22.090 --rc genhtml_function_coverage=1 00:20:22.090 --rc genhtml_legend=1 00:20:22.090 --rc geninfo_all_blocks=1 00:20:22.090 --rc geninfo_unexecuted_blocks=1 00:20:22.090 00:20:22.090 ' 00:20:22.090 10:14:36 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.090 --rc genhtml_branch_coverage=1 00:20:22.090 --rc genhtml_function_coverage=1 00:20:22.090 --rc genhtml_legend=1 00:20:22.090 --rc geninfo_all_blocks=1 00:20:22.090 --rc geninfo_unexecuted_blocks=1 00:20:22.090 00:20:22.090 ' 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:22.090 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90483 00:20:22.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90483 00:20:22.091 10:14:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90483 ']' 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.091 10:14:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.349 [2024-11-19 10:14:36.389452] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:22.349 [2024-11-19 10:14:36.389912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90483 ] 00:20:22.608 [2024-11-19 10:14:36.583567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.608 [2024-11-19 10:14:36.732769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.604 10:14:37 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.604 10:14:37 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:23.604 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:23.604 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:23.604 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:23.604 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.604 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.604 Malloc0 00:20:23.604 Malloc1 00:20:23.604 Malloc2 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.864 10:14:37 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:23.864 10:14:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa9b4107-d049-4f5e-8975-87b94c22fa0b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fa9b4107-d049-4f5e-8975-87b94c22fa0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa9b4107-d049-4f5e-8975-87b94c22fa0b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2b41a403-8329-4958-ae0c-7202b473cd49",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "65da4e73-4e6d-4dab-8ef4-5d59377dd201",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3f1804ee-90e4-421d-ab4e-1c40e5b0cd5a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:23.864 10:14:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:23.864 10:14:38 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:23.864 10:14:38 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:23.864 10:14:38 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:23.864 10:14:38 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90483 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90483 ']' 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90483 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.864 
10:14:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90483 00:20:23.864 killing process with pid 90483 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90483' 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90483 00:20:23.864 10:14:38 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90483 00:20:27.150 10:14:40 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:27.150 10:14:40 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:27.150 10:14:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:27.150 10:14:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.150 10:14:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.150 ************************************ 00:20:27.150 START TEST bdev_hello_world 00:20:27.150 ************************************ 00:20:27.150 10:14:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:27.150 [2024-11-19 10:14:40.878924] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:27.150 [2024-11-19 10:14:40.879116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90550 ] 00:20:27.150 [2024-11-19 10:14:41.055374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.150 [2024-11-19 10:14:41.201974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.717 [2024-11-19 10:14:41.778428] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:27.717 [2024-11-19 10:14:41.778511] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:27.717 [2024-11-19 10:14:41.778545] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:27.717 [2024-11-19 10:14:41.779266] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:27.717 [2024-11-19 10:14:41.779501] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:27.717 [2024-11-19 10:14:41.779547] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:27.717 [2024-11-19 10:14:41.779633] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:27.717 00:20:27.717 [2024-11-19 10:14:41.779666] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:29.094 ************************************ 00:20:29.094 END TEST bdev_hello_world 00:20:29.094 ************************************ 00:20:29.094 00:20:29.094 real 0m2.406s 00:20:29.094 user 0m1.916s 00:20:29.094 sys 0m0.363s 00:20:29.094 10:14:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.094 10:14:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:29.094 10:14:43 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:29.094 10:14:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:29.094 10:14:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.094 10:14:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:29.094 ************************************ 00:20:29.094 START TEST bdev_bounds 00:20:29.094 ************************************ 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90592 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90592' 00:20:29.094 Process bdevio pid: 90592 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90592 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90592 ']' 00:20:29.094 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.094 10:14:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:29.352 [2024-11-19 10:14:43.356855] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:20:29.353 [2024-11-19 10:14:43.357381] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90592 ] 00:20:29.353 [2024-11-19 10:14:43.551830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.612 [2024-11-19 10:14:43.741015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.612 [2024-11-19 10:14:43.741181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.612 [2024-11-19 10:14:43.741198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.547 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.547 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:30.547 10:14:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:30.547 I/O targets: 00:20:30.547 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:30.547 00:20:30.547 00:20:30.547 CUnit 
- A unit testing framework for C - Version 2.1-3 00:20:30.547 http://cunit.sourceforge.net/ 00:20:30.547 00:20:30.547 00:20:30.547 Suite: bdevio tests on: raid5f 00:20:30.547 Test: blockdev write read block ...passed 00:20:30.547 Test: blockdev write zeroes read block ...passed 00:20:30.547 Test: blockdev write zeroes read no split ...passed 00:20:30.547 Test: blockdev write zeroes read split ...passed 00:20:30.806 Test: blockdev write zeroes read split partial ...passed 00:20:30.806 Test: blockdev reset ...passed 00:20:30.806 Test: blockdev write read 8 blocks ...passed 00:20:30.806 Test: blockdev write read size > 128k ...passed 00:20:30.806 Test: blockdev write read invalid size ...passed 00:20:30.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:30.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:30.806 Test: blockdev write read max offset ...passed 00:20:30.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:30.806 Test: blockdev writev readv 8 blocks ...passed 00:20:30.806 Test: blockdev writev readv 30 x 1block ...passed 00:20:30.806 Test: blockdev writev readv block ...passed 00:20:30.806 Test: blockdev writev readv size > 128k ...passed 00:20:30.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:30.806 Test: blockdev comparev and writev ...passed 00:20:30.806 Test: blockdev nvme passthru rw ...passed 00:20:30.806 Test: blockdev nvme passthru vendor specific ...passed 00:20:30.806 Test: blockdev nvme admin passthru ...passed 00:20:30.806 Test: blockdev copy ...passed 00:20:30.806 00:20:30.806 Run Summary: Type Total Ran Passed Failed Inactive 00:20:30.806 suites 1 1 n/a 0 0 00:20:30.806 tests 23 23 23 0 0 00:20:30.806 asserts 130 130 130 0 n/a 00:20:30.806 00:20:30.806 Elapsed time = 0.613 seconds 00:20:30.806 0 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90592 00:20:30.806 10:14:44 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90592 ']' 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90592 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90592 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90592' 00:20:30.806 killing process with pid 90592 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90592 00:20:30.806 10:14:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90592 00:20:32.182 10:14:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:32.182 00:20:32.182 real 0m3.044s 00:20:32.182 user 0m7.441s 00:20:32.182 sys 0m0.516s 00:20:32.182 10:14:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.182 ************************************ 00:20:32.182 END TEST bdev_bounds 00:20:32.182 ************************************ 00:20:32.182 10:14:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:32.182 10:14:46 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:32.182 10:14:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:32.182 10:14:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.182 10:14:46 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:32.182 ************************************ 00:20:32.182 START TEST bdev_nbd 00:20:32.182 ************************************ 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90654 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90654 /var/tmp/spdk-nbd.sock 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90654 ']' 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:32.182 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:32.183 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.183 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:32.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:32.183 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.183 10:14:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:32.441 [2024-11-19 10:14:46.464638] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:32.441 [2024-11-19 10:14:46.465426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.441 [2024-11-19 10:14:46.668108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.699 [2024-11-19 10:14:46.818479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:33.266 10:14:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:33.833 1+0 records in 00:20:33.833 1+0 records out 00:20:33.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032415 s, 12.6 MB/s 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:33.833 10:14:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:34.092 { 00:20:34.092 "nbd_device": "/dev/nbd0", 00:20:34.092 "bdev_name": "raid5f" 00:20:34.092 } 00:20:34.092 ]' 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:34.092 { 00:20:34.092 "nbd_device": "/dev/nbd0", 00:20:34.092 "bdev_name": "raid5f" 00:20:34.092 } 00:20:34.092 ]' 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.092 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.351 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.685 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.686 10:14:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:35.253 /dev/nbd0 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:35.253 10:14:49 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.253 1+0 records in 00:20:35.253 1+0 records out 00:20:35.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550468 s, 7.4 MB/s 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:35.253 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:35.512 { 00:20:35.512 "nbd_device": "/dev/nbd0", 00:20:35.512 "bdev_name": "raid5f" 00:20:35.512 } 00:20:35.512 ]' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:35.512 { 00:20:35.512 "nbd_device": "/dev/nbd0", 00:20:35.512 "bdev_name": "raid5f" 00:20:35.512 } 00:20:35.512 ]' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:35.512 256+0 records in 00:20:35.512 256+0 records out 00:20:35.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449516 s, 233 MB/s 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:35.512 256+0 records in 00:20:35.512 256+0 records out 00:20:35.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.039826 s, 26.3 MB/s 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.512 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:35.771 10:14:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:36.030 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:36.030 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:36.030 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:36.030 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:36.288 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:36.289 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.289 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:36.289 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:36.547 malloc_lvol_verify 00:20:36.547 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:36.805 104aba56-c8ff-4ca6-9e75-035fb003fe32 00:20:36.805 10:14:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:37.063 023cdf6f-eabb-41ed-b6d5-e230761788b5 00:20:37.063 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:37.322 /dev/nbd0 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:37.322 mke2fs 1.47.0 (5-Feb-2023) 00:20:37.322 Discarding device blocks: 0/4096 done 00:20:37.322 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:37.322 00:20:37.322 Allocating group tables: 0/1 done 00:20:37.322 Writing inode tables: 0/1 done 00:20:37.322 Creating journal (1024 blocks): done 00:20:37.322 Writing superblocks and filesystem accounting information: 0/1 done 00:20:37.322 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.322 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90654 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90654 ']' 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90654 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90654 00:20:37.582 killing process with pid 90654 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90654' 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90654 00:20:37.582 10:14:51 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90654 00:20:38.959 10:14:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:38.959 00:20:38.959 real 0m6.841s 00:20:38.959 user 0m9.813s 00:20:38.959 sys 0m1.522s 00:20:38.959 10:14:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.959 10:14:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:38.959 ************************************ 00:20:38.959 END TEST bdev_nbd 00:20:38.959 ************************************ 00:20:39.219 10:14:53 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:39.219 10:14:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:39.219 10:14:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:39.219 10:14:53 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:39.219 10:14:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.219 10:14:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.219 10:14:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.219 ************************************ 00:20:39.219 START TEST bdev_fio 00:20:39.219 ************************************ 00:20:39.219 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:39.219 ************************************ 00:20:39.219 START TEST bdev_fio_rw_verify 00:20:39.219 ************************************ 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:39.219 10:14:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:39.478 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:39.478 fio-3.35 00:20:39.478 Starting 1 thread 00:20:51.685 00:20:51.685 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90867: Tue Nov 19 10:15:04 2024 00:20:51.685 read: IOPS=8671, BW=33.9MiB/s (35.5MB/s)(339MiB/10001msec) 00:20:51.685 slat (usec): min=24, max=120, avg=28.05, stdev= 3.53 00:20:51.685 clat (usec): min=13, max=411, avg=182.50, stdev=66.38 00:20:51.685 lat (usec): min=41, max=459, avg=210.55, stdev=66.88 00:20:51.685 clat percentiles (usec): 00:20:51.685 | 50.000th=[ 186], 99.000th=[ 306], 99.900th=[ 359], 99.990th=[ 379], 00:20:51.685 | 99.999th=[ 412] 00:20:51.685 write: IOPS=9141, BW=35.7MiB/s (37.4MB/s)(353MiB/9875msec); 0 zone resets 00:20:51.685 slat (usec): min=11, max=132, avg=22.97, stdev= 4.17 00:20:51.685 clat (usec): min=76, max=891, avg=422.84, stdev=52.40 00:20:51.685 lat (usec): min=97, max=1023, avg=445.81, stdev=53.49 00:20:51.685 clat percentiles (usec): 00:20:51.685 | 50.000th=[ 429], 99.000th=[ 562], 99.900th=[ 627], 99.990th=[ 766], 00:20:51.685 | 99.999th=[ 889] 00:20:51.685 bw ( KiB/s): min=33672, max=38752, per=98.69%, avg=36088.84, stdev=1734.22, samples=19 00:20:51.685 iops : min= 8418, max= 9688, avg=9022.21, stdev=433.55, samples=19 00:20:51.685 lat (usec) : 20=0.01%, 100=5.87%, 250=33.42%, 
500=58.93%, 750=1.77% 00:20:51.685 lat (usec) : 1000=0.01% 00:20:51.685 cpu : usr=98.96%, sys=0.31%, ctx=23, majf=0, minf=7536 00:20:51.685 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.685 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.685 issued rwts: total=86719,90277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.685 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:51.685 00:20:51.685 Run status group 0 (all jobs): 00:20:51.685 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=339MiB (355MB), run=10001-10001msec 00:20:51.685 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=353MiB (370MB), run=9875-9875msec 00:20:52.252 ----------------------------------------------------- 00:20:52.252 Suppressions used: 00:20:52.252 count bytes template 00:20:52.252 1 7 /usr/src/fio/parse.c 00:20:52.252 853 81888 /usr/src/fio/iolog.c 00:20:52.252 1 8 libtcmalloc_minimal.so 00:20:52.252 1 904 libcrypto.so 00:20:52.252 ----------------------------------------------------- 00:20:52.252 00:20:52.252 00:20:52.252 real 0m12.949s 00:20:52.252 user 0m13.193s 00:20:52.252 sys 0m0.843s 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:52.252 ************************************ 00:20:52.252 END TEST bdev_fio_rw_verify 00:20:52.252 ************************************ 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:52.252 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa9b4107-d049-4f5e-8975-87b94c22fa0b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fa9b4107-d049-4f5e-8975-87b94c22fa0b",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa9b4107-d049-4f5e-8975-87b94c22fa0b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2b41a403-8329-4958-ae0c-7202b473cd49",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "65da4e73-4e6d-4dab-8ef4-5d59377dd201",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3f1804ee-90e4-421d-ab4e-1c40e5b0cd5a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:52.253 /home/vagrant/spdk_repo/spdk 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:52.253 00:20:52.253 real 
0m13.166s 00:20:52.253 user 0m13.290s 00:20:52.253 sys 0m0.943s 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.253 10:15:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:52.253 ************************************ 00:20:52.253 END TEST bdev_fio 00:20:52.253 ************************************ 00:20:52.253 10:15:06 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:52.253 10:15:06 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:52.253 10:15:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:52.253 10:15:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.253 10:15:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:52.253 ************************************ 00:20:52.253 START TEST bdev_verify 00:20:52.253 ************************************ 00:20:52.253 10:15:06 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:52.512 [2024-11-19 10:15:06.539291] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 
00:20:52.512 [2024-11-19 10:15:06.539462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91027 ] 00:20:52.512 [2024-11-19 10:15:06.719685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:52.771 [2024-11-19 10:15:06.870878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.771 [2024-11-19 10:15:06.870893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.339 Running I/O for 5 seconds... 00:20:55.248 12644.00 IOPS, 49.39 MiB/s [2024-11-19T10:15:10.856Z] 13271.50 IOPS, 51.84 MiB/s [2024-11-19T10:15:11.793Z] 12913.00 IOPS, 50.44 MiB/s [2024-11-19T10:15:12.730Z] 13163.50 IOPS, 51.42 MiB/s [2024-11-19T10:15:12.730Z] 13018.20 IOPS, 50.85 MiB/s 00:20:58.498 Latency(us) 00:20:58.498 [2024-11-19T10:15:12.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.498 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:58.498 Verification LBA range: start 0x0 length 0x2000 00:20:58.498 raid5f : 5.01 6522.44 25.48 0.00 0.00 29642.27 266.24 23235.49 00:20:58.498 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:58.498 Verification LBA range: start 0x2000 length 0x2000 00:20:58.498 raid5f : 5.02 6469.18 25.27 0.00 0.00 29829.57 253.21 21567.30 00:20:58.498 [2024-11-19T10:15:12.730Z] =================================================================================================================== 00:20:58.498 [2024-11-19T10:15:12.730Z] Total : 12991.62 50.75 0.00 0.00 29735.59 253.21 23235.49 00:20:59.873 00:20:59.873 real 0m7.405s 00:20:59.873 user 0m13.530s 00:20:59.874 sys 0m0.364s 00:20:59.874 10:15:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.874 10:15:13 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:59.874 ************************************ 00:20:59.874 END TEST bdev_verify 00:20:59.874 ************************************ 00:20:59.874 10:15:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:59.874 10:15:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:59.874 10:15:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.874 10:15:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:59.874 ************************************ 00:20:59.874 START TEST bdev_verify_big_io 00:20:59.874 ************************************ 00:20:59.874 10:15:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:59.874 [2024-11-19 10:15:14.001028] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:20:59.874 [2024-11-19 10:15:14.001182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91120 ] 00:21:00.132 [2024-11-19 10:15:14.180592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:00.132 [2024-11-19 10:15:14.327643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.132 [2024-11-19 10:15:14.327645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.755 Running I/O for 5 seconds... 
00:21:03.068 506.00 IOPS, 31.62 MiB/s [2024-11-19T10:15:18.236Z] 634.00 IOPS, 39.62 MiB/s [2024-11-19T10:15:19.171Z] 676.67 IOPS, 42.29 MiB/s [2024-11-19T10:15:20.104Z] 698.00 IOPS, 43.62 MiB/s [2024-11-19T10:15:20.363Z] 736.20 IOPS, 46.01 MiB/s 00:21:06.131 Latency(us) 00:21:06.131 [2024-11-19T10:15:20.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.131 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:06.131 Verification LBA range: start 0x0 length 0x200 00:21:06.131 raid5f : 5.28 384.80 24.05 0.00 0.00 8230697.66 187.11 360328.84 00:21:06.131 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:06.131 Verification LBA range: start 0x200 length 0x200 00:21:06.131 raid5f : 5.26 373.91 23.37 0.00 0.00 8421748.17 222.49 369861.35 00:21:06.131 [2024-11-19T10:15:20.363Z] =================================================================================================================== 00:21:06.131 [2024-11-19T10:15:20.363Z] Total : 758.71 47.42 0.00 0.00 8324670.24 187.11 369861.35 00:21:07.508 00:21:07.508 real 0m7.709s 00:21:07.508 user 0m14.122s 00:21:07.508 sys 0m0.363s 00:21:07.508 10:15:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.508 10:15:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.508 ************************************ 00:21:07.508 END TEST bdev_verify_big_io 00:21:07.508 ************************************ 00:21:07.508 10:15:21 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:07.508 10:15:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:07.508 10:15:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.508 10:15:21 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:07.508 ************************************ 00:21:07.508 START TEST bdev_write_zeroes 00:21:07.508 ************************************ 00:21:07.508 10:15:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:07.767 [2024-11-19 10:15:21.765587] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:07.767 [2024-11-19 10:15:21.765770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91223 ] 00:21:07.767 [2024-11-19 10:15:21.942248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.026 [2024-11-19 10:15:22.079244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.594 Running I/O for 1 seconds... 
00:21:09.529 19959.00 IOPS, 77.96 MiB/s 00:21:09.529 Latency(us) 00:21:09.529 [2024-11-19T10:15:23.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.529 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:09.529 raid5f : 1.01 19941.26 77.90 0.00 0.00 6393.78 2040.55 8460.10 00:21:09.529 [2024-11-19T10:15:23.761Z] =================================================================================================================== 00:21:09.529 [2024-11-19T10:15:23.761Z] Total : 19941.26 77.90 0.00 0.00 6393.78 2040.55 8460.10 00:21:10.908 00:21:10.908 real 0m3.346s 00:21:10.908 user 0m2.883s 00:21:10.908 sys 0m0.331s 00:21:10.908 10:15:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.908 10:15:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:10.908 ************************************ 00:21:10.908 END TEST bdev_write_zeroes 00:21:10.908 ************************************ 00:21:10.908 10:15:25 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:10.908 10:15:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:10.908 10:15:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.908 10:15:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:10.908 ************************************ 00:21:10.908 START TEST bdev_json_nonenclosed 00:21:10.908 ************************************ 00:21:10.908 10:15:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:11.167 [2024-11-19 
10:15:25.177241] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:11.167 [2024-11-19 10:15:25.177428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91272 ] 00:21:11.167 [2024-11-19 10:15:25.368848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.426 [2024-11-19 10:15:25.535877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.426 [2024-11-19 10:15:25.536027] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:11.426 [2024-11-19 10:15:25.536077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:11.426 [2024-11-19 10:15:25.536095] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:11.684 00:21:11.684 real 0m0.745s 00:21:11.684 user 0m0.482s 00:21:11.684 sys 0m0.158s 00:21:11.684 10:15:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.684 10:15:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:11.684 ************************************ 00:21:11.684 END TEST bdev_json_nonenclosed 00:21:11.684 ************************************ 00:21:11.685 10:15:25 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:11.685 10:15:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:11.685 10:15:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.685 10:15:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.685 
************************************ 00:21:11.685 START TEST bdev_json_nonarray 00:21:11.685 ************************************ 00:21:11.685 10:15:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:11.944 [2024-11-19 10:15:25.964068] Starting SPDK v25.01-pre git sha1 fc96810c2 / DPDK 24.03.0 initialization... 00:21:11.944 [2024-11-19 10:15:25.964255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91303 ] 00:21:11.944 [2024-11-19 10:15:26.147779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.203 [2024-11-19 10:15:26.292203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.203 [2024-11-19 10:15:26.292344] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:12.203 [2024-11-19 10:15:26.292374] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:12.203 [2024-11-19 10:15:26.292404] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:12.462 00:21:12.462 real 0m0.711s 00:21:12.462 user 0m0.459s 00:21:12.462 sys 0m0.147s 00:21:12.462 10:15:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.462 ************************************ 00:21:12.462 10:15:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 END TEST bdev_json_nonarray 00:21:12.462 ************************************ 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:12.462 10:15:26 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:12.462 ************************************ 00:21:12.462 END TEST blockdev_raid5f 00:21:12.462 ************************************ 00:21:12.462 00:21:12.462 real 0m50.591s 00:21:12.462 user 1m8.769s 00:21:12.462 sys 0m5.832s 00:21:12.462 10:15:26 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.462 10:15:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 10:15:26 -- spdk/autotest.sh@194 -- # uname -s 00:21:12.462 10:15:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:12.462 10:15:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:12.462 10:15:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:12.462 10:15:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:12.462 10:15:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:12.462 10:15:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:12.462 10:15:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.462 10:15:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 10:15:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:12.722 10:15:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:12.722 10:15:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:12.722 10:15:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:12.722 10:15:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:12.722 10:15:26 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:21:12.722 10:15:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:12.722 10:15:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.722 10:15:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 10:15:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:12.722 10:15:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:12.722 10:15:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:12.722 10:15:26 -- common/autotest_common.sh@10 -- # set +x 00:21:14.626 INFO: APP EXITING 00:21:14.626 INFO: killing all VMs 00:21:14.626 INFO: killing vhost app 00:21:14.626 INFO: EXIT DONE 00:21:14.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.626 Waiting for block devices as requested 00:21:14.626 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.883 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:15.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:15.449 Cleaning 00:21:15.449 Removing: /var/run/dpdk/spdk0/config 00:21:15.449 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:15.449 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:15.449 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:15.449 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:15.449 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:15.449 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:15.449 Removing: /dev/shm/spdk_tgt_trace.pid56698 00:21:15.449 Removing: /var/run/dpdk/spdk0 00:21:15.449 Removing: /var/run/dpdk/spdk_pid56463 00:21:15.449 Removing: /var/run/dpdk/spdk_pid56698 00:21:15.708 Removing: /var/run/dpdk/spdk_pid56927 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57036 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57087 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57221 00:21:15.708 Removing: 
/var/run/dpdk/spdk_pid57241 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57449 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57566 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57673 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57795 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57903 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57937 00:21:15.708 Removing: /var/run/dpdk/spdk_pid57979 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58055 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58161 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58625 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58694 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58763 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58783 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58929 00:21:15.708 Removing: /var/run/dpdk/spdk_pid58945 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59096 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59112 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59177 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59201 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59265 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59288 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59483 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59521 00:21:15.708 Removing: /var/run/dpdk/spdk_pid59610 00:21:15.708 Removing: /var/run/dpdk/spdk_pid60977 00:21:15.708 Removing: /var/run/dpdk/spdk_pid61194 00:21:15.708 Removing: /var/run/dpdk/spdk_pid61344 00:21:15.708 Removing: /var/run/dpdk/spdk_pid62005 00:21:15.708 Removing: /var/run/dpdk/spdk_pid62222 00:21:15.708 Removing: /var/run/dpdk/spdk_pid62368 00:21:15.708 Removing: /var/run/dpdk/spdk_pid63023 00:21:15.708 Removing: /var/run/dpdk/spdk_pid63364 00:21:15.708 Removing: /var/run/dpdk/spdk_pid63510 00:21:15.708 Removing: /var/run/dpdk/spdk_pid64928 00:21:15.708 Removing: /var/run/dpdk/spdk_pid65192 00:21:15.708 Removing: /var/run/dpdk/spdk_pid65338 00:21:15.708 Removing: /var/run/dpdk/spdk_pid66764 00:21:15.708 Removing: /var/run/dpdk/spdk_pid67018 00:21:15.708 Removing: 
/var/run/dpdk/spdk_pid67169 00:21:15.708 Removing: /var/run/dpdk/spdk_pid68583 00:21:15.708 Removing: /var/run/dpdk/spdk_pid69040 00:21:15.708 Removing: /var/run/dpdk/spdk_pid69186 00:21:15.708 Removing: /var/run/dpdk/spdk_pid70699 00:21:15.708 Removing: /var/run/dpdk/spdk_pid70971 00:21:15.708 Removing: /var/run/dpdk/spdk_pid71118 00:21:15.708 Removing: /var/run/dpdk/spdk_pid72646 00:21:15.708 Removing: /var/run/dpdk/spdk_pid72916 00:21:15.708 Removing: /var/run/dpdk/spdk_pid73066 00:21:15.708 Removing: /var/run/dpdk/spdk_pid74588 00:21:15.708 Removing: /var/run/dpdk/spdk_pid75082 00:21:15.708 Removing: /var/run/dpdk/spdk_pid75233 00:21:15.708 Removing: /var/run/dpdk/spdk_pid75377 00:21:15.708 Removing: /var/run/dpdk/spdk_pid75834 00:21:15.708 Removing: /var/run/dpdk/spdk_pid76604 00:21:15.708 Removing: /var/run/dpdk/spdk_pid77011 00:21:15.708 Removing: /var/run/dpdk/spdk_pid77737 00:21:15.708 Removing: /var/run/dpdk/spdk_pid78230 00:21:15.708 Removing: /var/run/dpdk/spdk_pid79028 00:21:15.708 Removing: /var/run/dpdk/spdk_pid79469 00:21:15.708 Removing: /var/run/dpdk/spdk_pid81469 00:21:15.708 Removing: /var/run/dpdk/spdk_pid81927 00:21:15.708 Removing: /var/run/dpdk/spdk_pid82383 00:21:15.708 Removing: /var/run/dpdk/spdk_pid84516 00:21:15.708 Removing: /var/run/dpdk/spdk_pid85007 00:21:15.708 Removing: /var/run/dpdk/spdk_pid85522 00:21:15.708 Removing: /var/run/dpdk/spdk_pid86605 00:21:15.708 Removing: /var/run/dpdk/spdk_pid86939 00:21:15.708 Removing: /var/run/dpdk/spdk_pid87897 00:21:15.708 Removing: /var/run/dpdk/spdk_pid88231 00:21:15.708 Removing: /var/run/dpdk/spdk_pid89188 00:21:15.708 Removing: /var/run/dpdk/spdk_pid89516 00:21:15.708 Removing: /var/run/dpdk/spdk_pid90201 00:21:15.708 Removing: /var/run/dpdk/spdk_pid90483 00:21:15.708 Removing: /var/run/dpdk/spdk_pid90550 00:21:15.708 Removing: /var/run/dpdk/spdk_pid90592 00:21:15.708 Removing: /var/run/dpdk/spdk_pid90852 00:21:15.708 Removing: /var/run/dpdk/spdk_pid91027 00:21:15.708 Removing: 
/var/run/dpdk/spdk_pid91120 00:21:15.708 Removing: /var/run/dpdk/spdk_pid91223 00:21:15.708 Removing: /var/run/dpdk/spdk_pid91272 00:21:15.708 Removing: /var/run/dpdk/spdk_pid91303 00:21:15.708 Clean 00:21:15.968 10:15:29 -- common/autotest_common.sh@1453 -- # return 0 00:21:15.968 10:15:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:15.968 10:15:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.968 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:21:15.968 10:15:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:15.968 10:15:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.968 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:21:15.968 10:15:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:15.968 10:15:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:15.968 10:15:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:15.968 10:15:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:15.968 10:15:30 -- spdk/autotest.sh@398 -- # hostname 00:21:15.968 10:15:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:16.226 geninfo: WARNING: invalid characters removed from testname! 
00:21:42.767 10:15:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:46.113 10:16:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:49.398 10:16:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:51.931 10:16:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.463 10:16:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.751 10:16:11 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:00.283 10:16:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:00.283 10:16:14 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:00.283 10:16:14 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:00.283 10:16:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:00.283 10:16:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:00.283 10:16:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:00.283 + [[ -n 5204 ]] 00:22:00.283 + sudo kill 5204 00:22:00.292 [Pipeline] } 00:22:00.308 [Pipeline] // timeout 00:22:00.314 [Pipeline] } 00:22:00.331 [Pipeline] // stage 00:22:00.338 [Pipeline] } 00:22:00.353 [Pipeline] // catchError 00:22:00.364 [Pipeline] stage 00:22:00.367 [Pipeline] { (Stop VM) 00:22:00.380 [Pipeline] sh 00:22:00.660 + vagrant halt 00:22:04.851 ==> default: Halting domain... 00:22:10.128 [Pipeline] sh 00:22:10.407 + vagrant destroy -f 00:22:14.594 ==> default: Removing domain... 
00:22:14.607 [Pipeline] sh 00:22:14.891 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:14.901 [Pipeline] } 00:22:14.918 [Pipeline] // stage 00:22:14.924 [Pipeline] } 00:22:14.941 [Pipeline] // dir 00:22:14.947 [Pipeline] } 00:22:14.962 [Pipeline] // wrap 00:22:14.968 [Pipeline] } 00:22:14.981 [Pipeline] // catchError 00:22:14.990 [Pipeline] stage 00:22:14.993 [Pipeline] { (Epilogue) 00:22:15.006 [Pipeline] sh 00:22:15.287 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:21.864 [Pipeline] catchError 00:22:21.866 [Pipeline] { 00:22:21.880 [Pipeline] sh 00:22:22.161 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:22.161 Artifacts sizes are good 00:22:22.170 [Pipeline] } 00:22:22.185 [Pipeline] // catchError 00:22:22.198 [Pipeline] archiveArtifacts 00:22:22.205 Archiving artifacts 00:22:22.308 [Pipeline] cleanWs 00:22:22.320 [WS-CLEANUP] Deleting project workspace... 00:22:22.320 [WS-CLEANUP] Deferred wipeout is used... 00:22:22.326 [WS-CLEANUP] done 00:22:22.328 [Pipeline] } 00:22:22.345 [Pipeline] // stage 00:22:22.350 [Pipeline] } 00:22:22.365 [Pipeline] // node 00:22:22.371 [Pipeline] End of Pipeline 00:22:22.414 Finished: SUCCESS